Best practice for having separate clone data for development purposes?

Hi
I am on a hosted Apex environment
I have a workspace containing two instances/copies of the application: DEV and PROD
I would like to be able to develop functionality and data in the DEV instance and then promote it to PROD.
I gather that I can insert pages from DEV to PROD via Create -> New page as copy -> Page in another application
But I don't know how I can mimic this process with database objects, e.g. if I want to create a new table or manipulate the data in an existing table in a DEV environment before implementing it in the PROD environment.
Ideally this would be done in such a way that minimises renaming tables etc. when promoting pages from DEV to PROD.
Would it be possible to create a clone schema that could contain the same tables (with the same names) as PROD?
Any tips, best practices appreciated :)
Thanks

Hi,
ideally you should have a little more separation between your dev and prod environments. At a minimum you should have separate workspaces, each addressing a separate schema. APEX can be a little difficult if you want to move individual application objects, such as pages, between applications (easier page-level movement is a much-requested improvement), but this can be overcome by exporting and importing the whole application. You should also have some form of version control/backup of export files.
As far as database objects go (tables etc.), if you have TNS access to your hosted environment, then you can use SQL Developer to develop, maintain and synchronize your development and production schemas; objects in the different environments should have identical names. If you don't have that access, you can use the APEX SQL Workshop features, but these are a little more cumbersome than a tool like SQL Developer. Once again, the scripts for creating and upgrading your database schemas should be kept under some sort of version control.
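As an illustration only, here is a minimal Java sketch of applying one such version-controlled upgrade script to a schema over JDBC. The connection string, credentials and script path are placeholders, and a real migration tool (Flyway, Liquibase) would also track which scripts have already been run:

import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class ApplyUpgrade {
    public static void main(String[] args) throws Exception {
        // Placeholders: point these at your DEV (later PROD) schema and at
        // a script kept under version control, e.g. in your Git repository.
        // Requires the Oracle JDBC driver (ojdbc) on the classpath.
        String url = "jdbc:oracle:thin:@//dbhost:1521/devpdb";
        Path script = Path.of("migrations/V2__add_orders_table.sql");

        // Assumes the file holds a single DDL statement; real migration
        // tools split multi-statement files and record what has run.
        String ddl = Files.readString(script);
        try (Connection conn = DriverManager.getConnection(url, "dev_user", "secret");
             Statement stmt = conn.createStatement()) {
            stmt.execute(ddl);
        }
    }
}

Running the same script against DEV first and PROD later keeps the object names identical in both schemas, which is exactly what makes the page-level promotion painless.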
All of this supposes that your hosting solution allows more than one workspace and schema; if not, you may have to incur the cost of a second environment. One other option would be to do your development locally in an instance of Oracle XE, making sure you don't have any version conflicts in database object features or the APEX version between the two environments.
I hope this helps.
Regards
Andre

Similar Messages

  • Best practices for development / production environments

    Our current scenario:
    We have one production database server containing the APEX development install, plus all production data.
    We have one development server that is cloned nightly (via RMAN duplicate) from production. It therefore also contains a full APEX development environment, and all our production data, albeit 1 day old.
    Our desired scenario:
    We want to convert the production database to a runtime only environment.
    We want to be able to develop in the test environment, but since this is an RMAN-duplicated database, every night the runtime-only APEX will overlay it, along with the production versions of the apps. However, we still want up-to-date data against which to develop.
    Questions: What is best practice for this sort of thing? We've considered a couple options:
    1.) Find a way to clone the database (RMAN or something else) that leaves the existing APEX environment intact? If that is doable, we can modify our nightly refresh procedure to refresh the data, but not APEX.
    2.) Move apex (in both prod and dev environments) to a separate database containing only APEX, and use DBLINKS to point to the data in both cases. The nightly refresh would only refresh the data and the APEX database would be unaffected. This would require rewriting all apps to use DBLINKS though, as well as requiring a change to the code when moving to production (i.e. modify the DBLINK to the production value)
    3.) Require the developers to export their apps when done for the day, and reimport the following morning. This would leave the RMAN duplication process unchanged, but would add a manual step which the developers loathe.
    We basically have two mutually exclusive requirements - refresh the database nightly for the sake of fresh data, but don't refresh the database ever for the sake of the APEX environment.
    Again, any suggestions on best practices would be helpful.
    Thanks,
    Bill Johnson

    Bill,
    To clarify, you do have the ability, happily, to export/import at the application level. The issue is that if you have an application that consists of more than a couple of pages, you will find yourself in a situation where the changes to page 3 are tested and ready, but changes to pages 2, 5 and 6 are still in various stages of development. Then you need to get the change for page 5 in to resolve a critical production issue. How do you get that fix out without also shipping pages 2 and 6 in their current state, given that you have to move the application all at once? The point is that you absolutely are going to need version control at the page level, not at the application level.
    Moreover, the only supported way of exporting is via the GUI. While practically everyone doing serious APEX development has moved on to PL/SQL or utility hacks, Oracle still will not release a supported method for doing this. I have no idea why that is... maybe one of the developers would care to comment on the matter. Obviously, if you want to automate, you will have to accept this caveat.
    As to which backend source control tool you use, the short answer is that it really doesn't matter. As far as the VC system is concerned, your APEX exports are simply files. Some versioning systems allow promotion of code through various SDLC stages. I am not sure about Git in particular but, if it doesn't support this directly, you could always mimic the behavior with multiple repositories. That is, create a development repository into which you automatically commit exports every night. Whenever particular changes are promoted to production, you can at that time export from the development repository into the production one. You could, of course, create as many of these "stops" as necessary to mirror your shop's SDLC stages, e.g. dev, QA, integration, staging, production, etc.
    -Joe

  • Best practice for developing with CRM 2013 (On Premises)

    Hello all. I'm just starting to work with CRM, and I have some questions that hopefully will be simple for the seasoned developers. It's mostly just some best practice or general how-to questions for the group.
    - When creating a new Visual Studio CRM project I can connect to my CRM instance and create new WebResources which deploy to the CRM instance just fine, but how can I pull all the existing items in the CRM solution into the Visual Studio CRM project? Or do I need to export the solution to a ZIP, expand it with SolutionPackager.exe, then copy the results into my Visual Studio project to get it in sync?
    - When multiple developers are working on changes, is it best to keep everything in one Visual Studio project as mentioned above, or is it better for everyone to have their own CRM instance to code against, so they can export/import solutions as needed and have those solutions manually merged before moving into a common Test/QA environment?
    - When modifying the submenu on a CRM form, is it suggested to use Ribbon Workbench, or is it better/easier to just export the solution, expand it with SolutionPackager.exe, modify the ribbondiff and anything else required for the change, package it back up, then reimport it into CRM? I've heard from some that Ribbon Workbench has some limitations, but being green I wasn't sure what those limitations might be, or whether it'd be best to just make these changes manually. Or is there any way to keep a copy of the ribbondiff in Visual Studio and deploy it without having to repackage the solution and import the ZIP?
    I think that's it for now :) Thanks for any advice or suggestions. I really want to start learning the ins and outs of CRM and how all the pieces fit together. Also, can someone direct me to some documentation or books that might give more insight on developing for CRM 2013 or 2015 (moving to this soon)?
    Thanks for your time.

    Hi Sam
    Also interested in best practice around this area - especially recommended development routes, unit testing, continuous integration etc. - it would be great if you posted here if you find any good articles. At the moment we tend to just push changes onto a live system as and when appropriate, and I'd prefer to move away from that...
    Thanks
    Stuart

  • Any Best Practices for developing custom ABAP reports for Portal?

    Hello,
    The developers on our project are debating the best way to develop custom reports and make them available on the portal. Of the options we can think of, can you give any pros and cons, experiences, or other options?
    - Web-enabled ABAP report programs
    - Web Dynpro for ABAP
    - Web Dynpro for ABAP using ALV
    - Adobe Forms
    Does a "Best Practices" document or blog exist on this topic?
    Thanks,
    Colleen


  • Best practice for development using REST API - OData

    Hi All, I am new to REST. I am a developer who works mostly in server-side code using Visual Studio. Now that Microsoft is advocating writing code against the REST API instead of server-side code or the client-side object model, I am trying to use the REST API.
    I googled, and most of the examples show writing code and putting it in a Content Editor/Script Editor web part. How do I organize code and deploy it to staging/production in this scenario? Is there a Best Practice or example around this?
    Regards,
    Khushi

    If you are writing code in .aspx or .cs files, that does not mean you need to deploy it on the SharePoint server; it could be any other application running on a remote server. What I mean is that you can use C# and the REST API to connect to the SharePoint server.
    The REST API in SharePoint 2013 provides developers with a simple, standardized method of retrieving information from SharePoint, and it can be used from any technology that is capable of sending standard HTTP requests.
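    To make that concrete, here is a minimal sketch using Java's built-in HTTP client (the same request can be issued from C# or any other HTTP-capable stack). The site URL and bearer token are placeholders; real authentication depends on your environment, e.g. NTLM on-premises or OAuth for SharePoint Online:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SharePointRestClient {
        public static void main(String[] args) throws Exception {
            // Placeholder site URL and token; substitute your own values.
            String siteUrl = "https://contoso.example.com/sites/dev";
            String token = "<access-token>";

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(siteUrl + "/_api/web/lists"))
                    .header("Accept", "application/json;odata=verbose")
                    .header("Authorization", "Bearer " + token)
                    .GET()
                    .build();

            // The response body is JSON describing the site's lists.
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }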
    Refer to the following blogs, which provide more details comparing the major features of these programming choices:
    http://msdn.microsoft.com/en-us/library/jj164060.aspx#RESTODataA
    http://dlr2008.wordpress.com/2013/10/31/sharepoint-2013-rest-api-the-c-connection-part-1-using-system-net-http-httpclient/
    Hope this helps
    --Cheers

  • Best practice for developing and testing AIR iOS multitouch apps

    I'm considering switching our iPhone development practice from XCode/Objective C to AIR to leverage a larger pool of in house talent. The biggest concern I have right now is testing multitouch functionality - the prospect of having to do incremental builds, provisioning and deployment just to test and tweak the multitouch aspects of our projects is daunting, if not downright depressing.
    What tools / services or techniques have been developed to facilitate what is undoubtedly one of the most common issues with developing for multitouch iOS using a desktop-based AIR development environment?

    @tomaugerdotcom
    Something like this might help: https://testflightapp.com/
    Conceivably, you could roll your own internal service if that particular one doesn't suit you. (I don't have any knowledge of how they are doing it, but it shouldn't be hard to figure out, since Apple's constraining rules would only allow a few possibilities.)
    USB app install and debugging isn't supported on iOS. You have to use wireless.
    Another option specifically for multi-touch dev/testing, is to use an Android device.

  • What is the best practice for developing web service?

    Hi All,
    I'm a newbie to web services...
    I was wondering what would be the best approach in developing a web service:
    using tools or a programmatic approach?
    If I use WebLogic Workshop, am I tied to a vendor?
    Is it possible for me to develop web services using Workshop and deploy them
    in another app server?
    I would appreciate it if somebody could give me a pointer to start.
    I have already referred to BEA's docs.
    I'm still confused about a good starting point and the best approach to
    develop portable web services.
    Thanks in advance for any inputs.
    K K

    K K-
    You have a very valid point on the simplify-or-complicate matter. If you are
    going for clean and not-so-time-centric code, then there are several different
    programs and packages out there you can choose from.
    Since you are specialized in J2EE, then the Sun package may be what you are looking
    for. BEA's classes simplify much of the work you will be doing, but you could
    emulate their classes or extend yours beyond the functions provided in theirs.
    It all boils down to how much work you are willing to do.
    If you are asking for more detailed coding 'Design Patterns' to utilize, I would
    wait for a few more posts from other folks, as my work often requires me to utilize
    the tools provided.
    Sincerely,
    Eric Ballou
    "K K" <[email protected]> wrote:
    Eric,
    Thanks for the response.
    I was also looking at Sun's WSDP 1.1, which is a more programmatic approach.
    Somehow I feel that, being a J2EE developer, I should go in the direction of
    the programmatic approach.
    Using the tools could simplify or complicate things. Also, the Workshop
    samples import all the WebLogic-specific packages.
    My code looks so dirty with many vendor-specific packages being imported.
    Could you give me your suggestions for a clean and neat approach?
    I would personally prefer to avoid the quick and dirty approach.
    Thanks again.
    "Eric Ballou" <[email protected]> wrote in message
    news:[email protected]...
    K K-
    The best approach in developing portable web services is knowing what you are
    planning on using them for, as well as how much is willing to be spent, etc.
    BEA's Workshop is portable to other frameworks, but the ease of integrating a
    developed client or a developed server can vary greatly. Even more of an issue
    is migration from one framework to another. If you choose to develop in Workshop
    and your company later deploys .Net solutions, some of your work may have to be
    redone unless the company is willing to keep portions of the 'old' system around
    until new versions of the service are available. However, Workshop has several
    ant tools available that would assist you in deploying to other app servers or
    even a stand-alone application should you need cross-framework abilities.
    If you are just starting out in web services, http://www.webservices.org is a
    good place to start checking out vendors in the space.
    Sincerely,
    Eric Ballou
    "K K" <[email protected]> wrote:
    Hi All,
    I'm a newbee to web services...
    I was wondering what would be the best approach in developing a web
    service,
    using tools or programmatic approach?
    If I use WebLogic Workshop, am I tied to a vendor?
    Is it possible for me to develop web services using workshop and deploy
    in
    another app server..?
    I would appreciate if somebody could give me a pointer to start.
    I have already referred BEA's docs.
    I'm still confused on a good starting point on the best approach todevelop
    protable web services.
    Thanks in advance for any inputs.
    K K

  • Best practice for developing multi language Website

    Hi all
    I want to develop my website in multiple languages, and I know that I can put all the message strings in a resource bundle or in the database. However, I think this makes the web interface very difficult to develop, because I can't see anything in the HTML editor. Another solution is to use XSL; I believe HTML editors are able to display the tag name or some description of the XSL tag, but I am not sure about that because I haven't used it before.
    Have any expert web developers found a better solution, or do you think XSL is the best solution? Any suggestions are very welcome. Thank you!
    From
    Edmond

    Not being familiar with XSL, I say go for resources. If you define your own tag to display text, it isn't that hard to understand, e.g.
    <translate id="hello.world"/>. That, at least, is how I make multi-language websites.

  • Best practice for Development - any advice?

    We are fairly new to WebLogic/Java development. We are using 6.1. We are coding
    an application which uses EJBs, JSPs, etc., following the M-V-C
    model. We have about 10 developers working on various bits of
    the system. Some have WebLogic/Visual Cafe installed on PCs and
    can work happily without affecting anyone else. Others who do
    not (yet) have powerful-enough PCs to run all this are trying
    to develop and deploy their code on the unix box (HP_UX).
    To enable each developer to work independently, we have set up
    a new domain for each developer. The idea for this is so that
    each developer can re-deploy his .ear file in his own domain
    without affecting others. We seem to be having lots of problems
    with the consoles seeming to get confused and blowing up with
    memory errors and all sorts of nasty things.
    Is anyone else using a similar way of working?
    Is what we are doing stupid?
    Is there a better way of achieving the separation of developers?
    Any advice/thoughts would be gratefully received.

    We have several developers - each has their own development domain on the
    same server, but we're not experiencing console problems as you described.
    Are you sure they have their own domain instance of WLS?
    This development environment works very well for us. Like you indicated,
    each developer can deploy/redeploy without affecting others.
    Hope this helps...
    Rob
    "Paul Evans" <[email protected]> wrote in message
    news:3be191fd$[email protected]..
    >
    We are fairly new to WebLogic/Java development. We are using 6.1. We arecoding
    an application, which uses EJBs, JSPs, etc following the M-V-C
    model. We have about 10 developers working on various bits of
    the system. Some have WebLogic/Visual Cafe installed on PCs and
    can work happily without affecting anyone else. Others who do
    not (yet) have powerful-enough PCs to run all this are trying
    to develop and deploy their code on the unix box (HP_UX).
    To enable each developer to work independantly, we have set up
    a new domain for each developer. The idea for this is so that
    each developer can re-deploy his .ear file in his own domain
    without affecting others. We seem to be having lots of problems
    with the consoles seeming to get confused and blowing up with
    memory errors and all sorts of nasty things.
    Is anyone else using a similar way of working?
    Is what we are doing stupid?
    Is there a better way of achieving the separation of developers?
    Any advice/thoughts would be gratefully received.

  • Best Practice for Developer update access to database in Production

    I am curious to find out what other organizations are doing about developer access to SYSADM in production. For example, do you use a database account created like SYSADM that can be checked out for use and locked when not in use, or something else?

    Developers can be provided with read-only access to the SYSADM schema.
    Thanks
    Soundappan

  • Best practice for external but secure access to internal data?

    We need external customers/vendors/partners to access some of our company data (view/add/edit). It's not as easy as segmenting those databases/tables/records out from the existing ones (and putting separate database(s) in the DMZ where our server is). Our current solution is to have a hole on port 1433 from the web server into our database server. The user credentials are not in any sort of web.config but rather compiled into our DLLs, and that SQL login has read/write access to a very limited number of databases.
    Our security group says this is still not secure, but how else are we to do it? Even with a web service, there still has to be a hole somewhere. Is there any standard best practice for this?
    Thanks.

    Security is mainly about mitigation rather than being 100% secure; "we have unknown unknowns". The component needs to talk to SQL Server. You could continue to use HTTP to talk to SQL Server, perhaps even get SOAP transactions working, but personally I'd have more worries about using such a less-trodden path, since that is exactly the area where more security problems are discovered. I don't know your specific design issues, so there might be even more ways to mitigate the risk, but in general you're using a DMZ as a decent way to mitigate risk. I would recommend asking your security team what they'd deem acceptable.
    http://pauliom.wordpress.com

  • Best Practice for report output of CRM Notes field data

    My company has a requirement to produce a report with variable output, based upon a keyword search of our CRM Request Notes data. Example: the business wants a report returning all Service Requests where the Notes field contains the word "pay" or "payee" or "payment". As part of the report output, the business wants to freely select the output fields that accompany the notes data. Can anyone please advise on SAP's Best Practice for meeting a report requirement such as this? Is a custom ABAP application built? Does the data get moved to BW for reporting (and how are notes handled)? Is the data moved to a separate system?

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to certain extent.
    Regards
    Nikhil

  • Best Practice for disparately sized data

    2 questions in about 20 minutes!
    We have a cache which holds approx 80K objects, which expire after 24 hours. It's a rolling population, so the number of objects is fairly static. We're on a 64-node cluster with high units set, giving ample space. But... the data has a wide size range, from a few bytes to 30 MB, and everywhere in between. This causes some very hot nodes.
    Is there a best practice for handling a wide range of object size in a single cache, or can we do anything on input to spread the load more evenly?
    Or does none of this make any sense at all?
    Cheers
    A

    Hi A,
    It depends... if there is a relationship between keys and sizes, e.g. if this or that part of the key means that the size of the value will be big, then you can implement a key partitioning strategy, possibly together with key association on the key, in a way that evenly spreads the large entries across the partitions (and have enough partitions).
    Unfortunately, you would likely not get a totally even distribution across nodes, because you have a fairly small number of entries compared to the square of the number of nodes (btw, which version of Coherence are you using?)...
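    For illustration, a minimal Java sketch of the key association idea. The OrderKey class and its bucket scheme are hypothetical; it assumes Coherence's KeyAssociation interface, under which entries whose keys return equal associated keys are co-located, so varying the bucket value spreads entries over partitions:

    import java.io.Serializable;

    import com.tangosol.net.cache.KeyAssociation;

    // Hypothetical cache key. The bucket is chosen when the key is built,
    // e.g. derived from the expected size of the value, so that large
    // entries can be steered across partitions instead of clustering.
    public class OrderKey implements KeyAssociation, Serializable {
        private final long orderId;
        private final int bucket;

        public OrderKey(long orderId, int bucket) {
            this.orderId = orderId;
            this.bucket = bucket;
        }

        // Coherence partitions by the associated key rather than the key
        // itself, so this is the lever for controlling placement.
        public Object getAssociatedKey() {
            return Integer.valueOf(bucket);
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof OrderKey)) return false;
            OrderKey k = (OrderKey) o;
            return orderId == k.orderId && bucket == k.bucket;
        }

        @Override
        public int hashCode() {
            return Long.hashCode(orderId) * 31 + bucket;
        }
    }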
    Best regards,
    Robert

  • Best practices for submitting CF data to an AJAX page?

    Hi everyone,
    I've got a project I'm working on for work and have hit a little problem.
    I am extracting data from my database, and after each piece of data (just numbers, usually 10 chunks of numbers) I tack a "|" onto the end. Then I output the data to the page. Back on my AJAX-enabled page, I get the "responseText" from that page and split it up using JavaScript and the pre-inserted "|".
    This seems to work fine, but it is quite a bit messier. Also, it would really, really be nice to be able to do sorting and various other operations on the data with JavaScript instead of having to rely on CF's icky code logic.
    Can someone please enlighten me as to best practices for this type of thing? I suspect I'll probably be using XML somehow, but I'd like your opinion.
    Thanks!

    Check out the Samples and Documentation portions of Adobe's Spry website for client-side use of JSON with Spry:
    http://labs.adobe.com/technologies/spry/home.html
    Here is a link to Adobe's Spry forums:
    http://www.adobe.com/cfusion/webforums/forum/categories.cfm?forumid=72&catid=602
    If you are using CF8, you can use the SerializeJSON function to convert a variable to JSON. You might also be interested in the cfsprydataset tag. CF 8 documentation:
    http://livedocs.adobe.com/coldfusion/8/htmldocs/
    If you are using a previous version of CF, there is 3rd-party JSON support. You can find links at
    http://json.org.
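    The same idea outside ColdFusion, as a minimal Java sketch using the org.json library as an illustrative stand-in for SerializeJSON: emit JSON rather than a pipe-delimited string, and the AJAX page can parse it natively instead of splitting on "|".

    import java.util.List;

    import org.json.JSONArray;

    public class JsonOutput {
        public static void main(String[] args) {
            // The numbers the server page would otherwise join with "|".
            List<Integer> numbers = List.of(12, 34, 56);

            // Serialize to JSON; the client then uses JSON.parse()
            // instead of responseText.split("|").
            String payload = new JSONArray(numbers).toString();
            System.out.println(payload); // [12,34,56]
        }
    }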

  • ASM BEST PRACTICES FOR 'DATA' DISKGROUP(S)

    In our quest to reduce operating costs we are consolidating databases and eliminating RAC in favor of standalone servers. This is a business decision that is a certainty.  Our SAN has been upgraded, and the new database servers are newer, faster, etc.
    Our database version is 11.2.0.4 with Grid Infrastructure 12.1.0.1. Our data diskgroup is RAID-5 and our FRA is RAID-1+0. ASM has external redundancy. All disks are of equal size with equal storage performance and availability.
    Previously our databases were on separate clusters by function: OLTP, REPORTING and ENTERPRISE CONTENT MANAGEMENT. Development/Acceptance shared a cluster, while production was separate.
    The new architecture combines different functions onto one server for dev/acc, and another for production. This means they will all be using the same ASM instance. Typically we followed Oracle's recommendation to have two diskgroups, one for data and the other for FRA. That worked well when a single database was the only one using the data diskgroup. Now that we are consolidating databases, is the best practice still to have one data diskgroup and one FRA diskgroup? For example, production will house 3 databases. OLTP is 500 GB, Reporting is 1.3 TB, and Enterprise Content Management is 6 TB and growing.
    My concern is that if all 3 databases access the same data diskgroup, the smaller OLTP must traverse the 6 TB of content management data. Or is this thinking flawed?
    Does this warrant separate diskgroups?  Are there pros and cons to this?
    Any insights are appreciated.
    Best Regards,
    Sherrie

    I have many issues to deal with in this 'consolidation', but budget reduction is happening in state and regional government. Our SAN storage is for our enterprise infrastructure and not part of my money-savings directive. We are also migrating to UCS blades for the infrastructure, also not part of my budget-reduction contribution. Oracle licensing is our biggest software cost; this is where my directive lies. We've always been conservative and done more with less; now we will do with less, but differently, because the storage and hardware are awesome.
    We've been consolidating databases onto RAC clusters and standalones since we started doing Oracle. For the last 7 years we've supported ASM, 6 databases and 2 passive standby instances (with Data Guard) on a 2-node cluster totalling 64 GB of memory. The new UCS blades have 256 GB of memory. I get that each database must support its background processes. If I add up the SGA, allocated PGA and background processes, they take up about 130 GB of memory, but also consider that there is an overhead to RAC. In all the years we've had Oracle, most of our failures, outages or downtime were because of RAC. On the plus side, the seamless failover saved us most times (not all times), but required administrative time for troubleshooting.
    I would love to go to Oracle 12c and use its multitenant architecture, but I have 3rd-party applications that don't yet support it. 11.2 might be our last release unless I can reduce costs. Consolidation is real and much needed, which I believe is why Oracle responded to the market with multitenancy.
    But back to my first question about how many diskgroups should service a group of databases. What I am hearing, and think I agree with, is that one data diskgroup will suffice, because the ASM instance knows where to retrieve the data, and waste will be reduced, as well as management overhead.
    I still need to do some ciphering and by no means have a final plan, but thank you all for your insights and contributions.
