Is source version control included already?

I've read posts announcing that source version control would be added to SQL Developer. Has it been done already?
TIA,
kazelot

We have not yet included this in SQL Developer, but our intention is to do this in our next major release. While we do not yet have a detailed Statement of Direction available, there is a document you can review on OTN.
SQL Developer does offer the ability to add external programs and some users are using this to connect to their source code control systems.
Sue

Similar Messages

  • Source version control integration with Siebel

    Hi All,
    I am trying to integrate Siebel Tools with a third-party version control system, i.e. SVN.
    I have passed the required parameters in srcctrl.bat and set the options in Tools accordingly.
    Checkout/check-in is working fine, but the files are not getting created in the SVN repository.
    Does anyone have a working srcctrl.bat file for this?
    I appreciate all your help.

    Nothing much yet, but I found a website that has the batch file for integration with SVN.
    Guess I have to contact the author to get more details:
    http://arwaheem.wordpress.com/2008/04/30/siebel-version-control-source-code-integration-using-sub-version/

  • Version control for databases, schemas, objects

    Dear All,
    I'm looking for a designer tool with version control capabilities. I don't need many types of models; if it can do ER modelling and version control, and it has a command-line interface, then it's fine. (I need to automate everything, so installing schemas with one click or one command shouldn't be a problem.) The funny thing is that I've already built such environments with SVN and VSS, but now I need a reliable product with these features. (I don't like Designer, so that one is out of scope.) One more thing: it has to be able to store the parameters of objects, for example PCTFREE, PCTUSED, TABLESPACE, etc.
    I'm looking forward to your help.
    Franky

    Released in April 2008, Oracle SQL Developer 1.5 is the "Version Control" release, as it includes integration with the open source version control products CVS and Subversion. Supporting version control is a File Browser for browsing and reading files stored in the file system. You can open and edit these files from within SQL Developer.
    http://www.oracle.com/technology/products/database/sql_developer/files/what_is_sqldev15.html
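    For reference, the artifact that typically goes under version control in this kind of setup is just the DDL script itself, including the physical attributes Franky mentions (PCTFREE, PCTUSED, TABLESPACE). A minimal sketch, with hypothetical table and tablespace names:
    -- customers.sql : kept in SVN/VSS and rerun from the command line via SQL*Plus
    CREATE TABLE customers (
        customer_id   NUMBER(10)    NOT NULL,
        customer_name VARCHAR2(100) NOT NULL,
        created_date  DATE DEFAULT SYSDATE,
        CONSTRAINT customers_pk PRIMARY KEY (customer_id)
    )
    PCTFREE 10   -- physical storage parameters are recorded in the script
    PCTUSED 40   -- (ignored in ASSM tablespaces, but harmless to keep)
    TABLESPACE users_data;
    Installing the schema with one command then reduces to running the script with SQL*Plus (e.g. sqlplus user/password@db @customers.sql).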

  • "version control" for Oracle database?

    Hi,
    My work involves loading data from CSV files into database tables. The data structure in the CSV files is different from that in the tables, so the loading is not straightforward and I often make mistakes along the way. I would like to know the best practice for undoing mistakes and rolling back to a meaningful point in time. To make this more concrete, consider the following scenario.
    10:00AM I start loading some data into the database. I create two external tables for my CSV files.
    10:30AM I create a PL/SQL script to insert the data from the external tables into the target tables.
    10:35AM I run the PL/SQL script and commit the changes.
    11:00AM I notice a bug in my script: some of the data is loaded incorrectly, and some is not loaded at all.
    11:15AM I fix the bug and try to run the script again, but this time it fails because of unique constraint violations.
    At this point, I want my database to go back in time to 10:00AM so I can start over. How can I do this?
    12:00PM Suppose I manage to start over and successfully load the two CSV files. I still have more files to load. Before I proceed, I want to somehow "tag" the database so that I can go back to this state later (say, two weeks from now, when the rollback segments won't be large enough to go back two weeks).
    Currently I use Data Pump export/import to undo mistakes on my development server. Due to the size of the database, this is not as efficient as I would like. I come from a Java developer background, and the scenario sounds a lot like source version control to me. Is there such a thing in database land? What's the best practice for rapid try/error/rollback cycles?

    Is the data in the external tables sorted by some attribute? Consider keeping a small metadata table indicating the last successfully committed key value of that attribute. Then, after each commit, set a savepoint (use the attribute key value for the savepoint name) and continue execution. If you find an error before your next commit, you can roll back to that savepoint and not lose all of the updates prior to it; but remember that a subsequent commit erases all savepoints you have set. Flashback of the table(s) is also a good idea. You can get the current database SCN by executing 'SELECT current_scn FROM v$database' (you may need privileges from the DBA to read this view), and then execute 'FLASHBACK TABLE <table_name> TO SCN <scn_no>'. You can also use a timestamp in place of the SCN with the FLASHBACK command.
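    To make that concrete, here is a minimal sketch of the savepoint/flashback approach; the table names, key values and SCN are hypothetical, and note that FLASHBACK TABLE requires row movement to be enabled on the table:
    -- note the SCN before starting the 10:00AM load (say it returns 8643021)
    SELECT current_scn FROM v$database;
    -- one-time prerequisite for FLASHBACK TABLE
    ALTER TABLE target_orders ENABLE ROW MOVEMENT;
    -- load in batches from the external table, marking each good batch with a savepoint
    INSERT INTO target_orders SELECT * FROM ext_orders WHERE order_key <= 1000;
    SAVEPOINT batch_1000;
    INSERT INTO target_orders SELECT * FROM ext_orders WHERE order_key BETWEEN 1001 AND 2000;
    -- a mistake in the second batch? undo only that batch, then commit the good data
    ROLLBACK TO SAVEPOINT batch_1000;
    COMMIT;
    -- later, to return the table to its 10:00AM state (a timestamp can be used instead of the SCN)
    FLASHBACK TABLE target_orders TO SCN 8643021;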

  • Database source code control or version management

    Hi all,
    I work on a data warehouse development project, where database schema changes form the majority of the development work. As a development DBA, I am responsible for ensuring that all database schema changes are properly version controlled.
    We currently use CVS as our source code control system. CVS works well enough where stored procedures, functions and packages are involved, but when it comes to table definitions we find it cumbersome.
    Hence I would like to know which tools you are using for version control of schema changes. Any links to best practices on database version control would be much appreciated.

    I think Oracle has introduced something in 11g for version control. You can also use third-party software like ERwin, or explore Oracle's data modelling software, which is currently in beta.
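    As a stop-gap, regardless of tool, one common approach is to generate the table DDL from the data dictionary and check the resulting scripts into CVS, so table definitions are versioned the same way as your packages. A minimal sketch using DBMS_METADATA (the schema and table names are placeholders):
    SET LONG 1000000 PAGESIZE 0 LINESIZE 200
    -- spool the current definition of one table into a file that CVS can diff
    SPOOL sales_fact.sql
    SELECT DBMS_METADATA.GET_DDL('TABLE', 'SALES_FACT', 'DWH_OWNER') FROM dual;
    SPOOL OFF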
    Regards

  • Source Configuration Management / Version Control

    I was wondering what the Forte raving masses out there are doing about Source
    Configuration Management and Version Control type of issues?
    Have you been able to implement or "skunk work" a packaged product with your
    Forte development environment?
    Our shop consists of Windows NT Forte developers coding for predominantly
    Windows 95 clients and an HP UNIX central server. We currently use a
    home-grown "system" to handle Source Configuration Management and Version
    Control issues. We are now looking to see if there is a better way to do this.
    We've identified several Industry Standard packages (SCCS, CVS, Microsoft
    SourceSafe etc.) and still haven't found anything very useful.
    What I am seeing is that all of the packages so far have direct hooks in C++,
    Visual Basic etc.
    I have yet to see something with Forte hooks.
    Kelsey Petrychyn, SaskTel Forte Technical Analyst
    ITM - Technology Solutions - Distributed Computing (OTC)
    Tel (306) 777 - 4906, Fax (306) 359 - 0857
    Internet:[email protected]
    Quality is not job 1. It is the only job!

    Kindly specify the email address to apply to for the mentioned job

  • How to use Source Code Control for a Large Application?

    Hi, All!
    I would like to collect knowledge about "best practice" examples for using source code control and project organization for a relatively large application (let's say approx. 1000 SubVIs).
    Tools used:
    LabVIEW 8.0
    CVS Server
    PushOK CVS Proxy Client
    WinCVS
    With LabVIEW 8 we can organize a large project pretty well. This is described in the article Managing Large Applications with the LabVIEW Project.
    I have also read this article: Using Source Control Software with LabVIEW. In that article SourceSafe is used, but with PushOK everything looks nearly the same and works (some tricks are required for the compare function).
    Example: two developers working together on the same project. Internally the project is modular, so one developer will work on the module "Analysis" and another on "Configuration" without interference. These modules are placed into subfolders as shown in the example above.
    Scenario 1:
    Developer A starts with a modification of the module "Analysis". Some files are checked out. He would like to add some SubVIs, so he must also check out the project file (*.lvproj); otherwise he cannot add anything to the project structure.
    Developer B at the same time would like to add some new functions to the module "Configuration". He also needs to check out the project file, but this file is already checked out by Developer A (and locked). So he must wait until the lvproj file is checked in. Another option is to mark *.lvproj files as text files in PushOK, but then one of the developers will get a conflict message on check-in and merging will be necessary. This situation will come up very often, because in most cases the *.lvproj file will be checked out all the time.
    Question: Which practice is better for such a situation? Are libraries better than folders for a large project?
    Scenario 2:
    Developer C joins the team. First, he must get the complete project code to start (or at least the code of the one library assigned to him).
    Question: How can this be done within the LabVIEW IDE? Or should WinCVS (or another SCC UI) be used for the initial checkout?
    Scenario 3:
    Developer D is responsible for the build. Developers A, B and C have added a lot of files to the modules "Analysis", "Configuration" and "FileIO". For the build he needs to get the complete code. If our project is split into folders, he should get the latest *.lvproj first, then the newly added SubVIs will appear in the Project Explorer, then he should expand the tree, select all SubVIs and get the latest versions of all of them. If the project is organized in libraries, he must do the same for each library, mustn't he?
    Question: Is this the "normal way", or should WinCVS be used for this? In WinCVS it's possible with two mouse clicks, but I would prefer to get all the code from CVS recursively within the LabVIEW IDE...
    That was a long post... So, if you are already working with LabVIEW 8 and SCC on a large project, please post your knowledge here about project structure (folders or libraries) and best practices; it may be helpful and useful for all of us. Any examples/use cases/links etc. are appreciated.
    Thank you,
    Andrey

    Regarding your scenarios:
    1. Using your example, let's say both developers checked out version 3 of the project file. Assuming that there are only files under the directories in the example project, when Developer A checks in his version of the project, there will be new files in one section of the project separate from where Developer B is working. Developer B notices that there is now a version 4 of the project. He needs to resolve the changes, so he will need to merge his changes into the latest version of the project file. Since the project file is a text file, that is easy to do. Where an issue arises is that after Developer B checks in his merged changes, there is a revision 5. When Developers A and B go to make another change, they get the latest version, which will have the merged changes to the project file but not the referenced files from both Developer A and B. So when A opens version 5, he sees that he is missing the files that B checked in, and vice versa. Here is where the developers will need to manually use the source control client and, external to LabVIEW, get those new files.
    Where libraries help with the above scenario is that the library is a separate file from the project, so changes made to it outside of the project do not require the project to be modified. So this time the developers are using a single project again, which this time references two libraries. The developers check out the libraries, make changes to them, and then check those changes in. So when each developer opens the project file, since it references the libraries, the changes to the libraries will be reflected. There is still the issue of the new files not automatically coming down when the latest version of the library is obtained. Again, the developers will need to manually use the source control client and, external to LabVIEW, get those new files. In general, you should take advantage of the modularity that libraries provide.
    2. As noted in the above scenario, there is no intrinsic mechanism to get all files referenced by a LabVIEW project. Files that are missing will be noted. The developer will then have to use the source control provider's IDE to get the initial contents of the project (or library).
    3. See the above scenarios.
    George M
    National Instruments

  • OWB Process Flow - What is the best version control tool?

    HI all,
    I have just started working with OWB and I have a question about the best way to do something.
    Imagine the scenario below:
    I have two or more requests, for example:
    Request 1: Create a Dimension City.
    Request 2: Create a Dimension Products.
    I have ONE process flow and I need to put my changes inside it. This is my problem.
    In my scenario I don't know which request goes to Prod first.
    If I put Request 1 and Request 2 in my PROCESS FLOW, I may need to change it if someone decides to change MY REQUEST PRIORITY.
    Is there something in OWB to "control the version or changes"? For a mapping I export the MDL and commit it to SVN, but I don't know how I can do that with a process flow.
    Is there something that allows multiple people to work on different mappings and the SAME PROCESS FLOW?
    What is the best way to work with process flows and version control?
    What are the best practices when it comes to version control?
    Thanks.

    Amit,
    Are you really doing this in 10.1.3.x and not 11g?
    At any rate, I don't see how #2 and #3 relate whatsoever to your choice of a version control system. OK, maybe in #2 if there is some "maintenance" activity to be done against the version control server. Subversion is the open source alternative that you listed there and is pretty commonly used. If your company is already using one of the mentioned tools, why change? About the only thing I'd mention is to advise you NOT to use CVS for well documented reasons (JDev does support it) - if you would have picked CVS otherwise, choose Subversion. As far as question #1 - I've only used Subversion (well, I did use CVS for a while) with JDeveloper, so I can say it was "effective enough for me." In 10.1.3.x, I also used the external svn tools for doing lots of things like merging and so forth; in 11g, the support is much much better.
    Best,
    John

  • RH9 Version Control not in File menu

    Hi all,
    I tried to add my project to Version Control, but I don't see the Version Control command on the File menu. The Version Control toolbar is already in my workspace, but it is disabled.
    Please help me.
    Thank you.

    Goldfish,
    If you are using TFS as source control, here's the topic that helped me find the answer: http://adobe.hosted.jivesoftware.com/message/5076624#5076624

  • Writer can't see projects in Robo Version Control - no error messages

    Background:
    Robo Version Control 3.1
    RoboHelp 8 (same issue happened with 7)
    Issue:
    We are having trouble with our source control, and I wasn't able to find an answer on this forum or on Google.
    From time to time, when a writer opens RoboSource Control Explorer, they can't see any projects. They're not getting error messages, they seem to connect to the DB, but the left pane is simply empty (they don't see the root folder or anything else). We recently upgraded to RoboHelp 8, but this has happened on RoboHelp 7 too. Source control does work - for example, if they open their local version of a version controlled project, they can get the latest version of files, check in, check out and so on. They simply can't use RSC Explorer, so they can't Get projects they don't have on their HDDs already.
    In the meantime, Source Control works just fine for the rest of the team, so I'm guessing it's a problem with some local settings. (The writer who is having trouble now can't get it working on her machine with her personal user or the admin user. At the same time, I can use the admin user just fine from my machine.)
    This problem has always solved itself after a while, but it's getting a bit annoying and I'm hoping someone knows why this is happening.

    In general, manually moving files around inside the store like this should be discouraged.
    reconstruct is telling you something is wrong:
    ERROR: Inconsistent information: 0 idx records   41 messages   0 expunged
    Reconstructing...
    cannot retrieve message uid 590
    cannot retrieve message uid 807
    cannot retrieve message uid 911
    cannot retrieve message uid 915
    cannot retrieve message uid 1198
    cannot retrieve message uid 1286
    The first line is to be expected based on what you have done. The store.idx indicates there are no messages in the folder. But it found 41 .msg files. So it is going to rebuild store.idx to fix that.
    But then there is some problem with accessing the .msg files. Possibilities would include:
    - wrong ownership/permissions?
    - the .msg file being in the wrong NN subdir ??
    If those guesses do not lead to anything, try truss on the reconstruct command to see what happens when reconstruct tries to open those files.

  • RoboHelp 11 .mdb files unhandled in SharePoint version control

    Since RoboHelp 11, .mdb files don't seem to be recognized with SharePoint version control. Note that we have ensured the file type is allowed in SharePoint.
    Launching RoboHelp in Admin mode does not help.
    Note that this was working under RoboHelp 10.

    Hi, goguenr
    Because RoboHelp 11 has some enhancements and a new topic-sharing workflow (which can be "cloud-based"), there have been some modifications, so perhaps it requires a re-connection to SharePoint. Normally, the RoboHelp/SharePoint integration is most commonly used for source control of the files related to a specific RoboHelp project. RoboHelp will exclude any "output" files (like those in the !SSL! folder) because, after all, they are output files and not "source" files, as source control implies. However, you can modify how this works with some configuration. If you also link to files (which are managed by "Baggage" in the Project Manager), those are added to source control as well. (But it sounds like you were already doing that with previous versions, so sorry if I'm stating the obvious.)
    I think it would be great if you could take a look at a webinar that Willam van Weelden and I did last fall. A recording is available for viewing and is free. You can also learn more on Willam's website:
    http://www.wvanweelden.eu/blog/2013/08/17/adobe-robohelp-using-sharepoint-version-control
    The recording is here (you may be asked to sign in with free Adobe ID)
    http://www.adobe.com/cfusion/event/index.cfm?event=set_registered&id=2099687&loc=en_us
    Willam is more versed in this than I, so perhaps he will chime in with more.
    Thanks
    John Daigle
    Adobe Certified RoboHelp and Captivate Instructor
    Evergreen, Colorado
    www.showmethedemo.com

  • Extremely slow performance on projects under version control using RoboHelp 11, PushOk, Tortoise SVN repository

    We are also experiencing extremely slow performance for RoboHelp projects under version control. We are using RoboHelp 11, PushOk and a Tortoise SVN repository on a Linux server. We are using a Linux server on our IT guys' advice, because we found SVN version control under Windows was unstable.
    When placing a RoboHelp project under version control (and yes, the project is on my local machine), it can take up to two hours to complete. We are using the RoboHelp sample projects to test.
    We have tried to put the project under version control from RoboHelp, and also tried first putting the project under version control from Tortoise SVN and then opening the project from version control in RoboHelp. In both cases, the project takes a ridiculous amount of time to open. The RoboHelp status bar displays "Querying Version Control Status" for about an hour before it starts to download from the repository, which then takes more than an hour to complete. In many cases RoboHelp becomes unresponsive and we have to start the whole process again.
    If adding the project to source control completes successfully and the project is opened from version control, performing any function also takes a very long time, such as creating a topic. When I generated a printed documentation layout it took an astonishing 218 minutes and 17 seconds to complete. Interestingly, when I generated the printed documentation layout again, it took 1 minute and 34 seconds. However, when I closed the project, opened it from version control, and tried to generate a printed documentation layout, it again took several hours to complete. The IT guys are at a loss and say it is not a network issue, and I am starting to agree that this is a RoboHelp issue.
    I see there are a few other discussions here related to this kind of poor performance, none of which seem to have been answered satisfactorily. For example:
    Why does it take so long when adding a new topic in RH10 with PushOK SVN
    Does anybody have any ideas on what we can do or what we can investigate? I know that there are other options for version control, but I am reluctant to pursue them until I am satisfied that our current issues cannot be resolved.
    Thanks, Mark

    Do other applications work fine with the source control repository? The reason I'm asking is that you must first rule out external factors causing this behaviour. It seems that your IT guys have already looked at it, but it's better to be safe than sorry.
    I have used both VSS and TFS and I haven't encountered such a performance issue. I would suggest filing this as a bug if you can rule out that the problem is related to external influences: https://www.adobe.com/cfusion/mmform/index.cfm?name=wishform&loc=en
    Kind regards,
    Willam

  • Version Control of APEX Pages and Shared Components

    Background:
    My organisation has a large customer base, and over the last two years we have been migrating from a Forms to an APEX user presentation layer. We have had a number of customers live on the APEX front end for close to a year now.
    Our current method of releasing APEX objects is at the application level (i.e. applications are exported for version control in PVCS and then released to Test, etc.). We now want to investigate exporting pages and shared components individually. Hence, I have a few questions:
    1. If I export a page and it is checked into PVCS, and I forget to export a 'List of Values' shared component, what happens when the page in PVCS is created in another environment (i.e. Test)? I guess the 'Page Import' would still succeed, but the reference to the 'List of Values' would be some large made-up number.
    How would we detect the missing dependency after import?
    2. Regarding new or changed templates: once again, if a page references a new template and is then exported, checked into PVCS and imported into Test, but the template is missed for migration to Test, would the import succeed but the template reference be broken, as in number 1?
    3. How can application-level objects be locked (reserved) when undergoing modification?
    Any comments would be appreciated especially if there are any sites using pages and shared component exports for version control and releases.
    For anyone who's interested, the method we are thinking of using is:
    ..Page Export script will be version controlled
    ..ALL the shared component export scripts will be added to 1 main SQL script
    Hence we only end up with 2 configurable objects in PVCS.
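    As a rough illustration of the second point (the file names below are hypothetical), the "1 main SQL script" would simply be a SQL*Plus driver that calls the individual shared component export scripts:
    -- shared_components_main.sql : the single PVCS-controlled driver script
    -- each @@ call runs a component export script from the same directory as this script
    @@lov_exports.sql
    @@template_exports.sql
    @@list_exports.sql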

    Nigel,
    1. If I export a page and this is checked into PVCS and I forget to export a 'List of Values' shared component. What happens when the page in PVCS is created in another environment (ie Test). I guess the ‘Page Import’ would still succeed but the reference to the ‘List of Values’ would be some large made up number.
    For component export/import, the source and target workspace ID and application ID must be identical. You can achieve the workspace "sameness" by exporting and importing the workspace from one database to another, thus preserving the workspace's numeric ID, aka security group ID. Similarly, applications must be exported/imported/installed without changing their IDs in the installed-into instance. More fundamentally, the application you import/install components into must be an identical copy of the source application with respect to the internal object IDs, allowing only for differences that incent you to migrate changes from a higher rev level of the application into a copy that is at a lower rev level.
    As to the specific question, if you copied a page but didn't copy an LOV into the target application, then if the LOV referenced by the page already existed in the target application, the page would simply reference the existing, perhaps down-level, LOV in that application. If the LOV did not already exist but had been newly created in the source application, then the target application page would contain an invalid reference and would produce a runtime error.
    How would we detect the missing dependency after import ?
    I don't know of any reports that would tell you this. There are several types of omissions that you need to watch out for, not all of which can be detected by inspection of the target application in isolation.
    2. Regarding New or Changed Templates. Once again, if a page references a new template and is then exported, checked into PVCS and imported into test but the template is missed for migration to test, would the import succeed but the template reference would be broken, like in number 1.
    Yes, same case.
    3. How can Application level objects be locked (reserved) when undergoing modification.
    There is no provision for this as there is for pages.
    For anyone who's interested, the method we are thinking of using is:
    ..Page Export script will be version controlled
    ..ALL the shared component export scripts will be added to 1 main SQL script
    Hence we only end up with 2 configurable objects in PVCS.
    So you propose to have one script of all pages and another script for everything else? I'm not sure I got that right.
    Scott

  • Poll - Which Version Control Software Do You Use With LabVIEW?

    I wish the forums had a poll feature. I created a poll in the developer community - Which Version Control Software Do You Use With LabVIEW?
    http://decibel.ni.com/content/polls/1818
    Edit: I just saw that there already is a poll for that
    http://decibel.ni.com/content/polls/1050
    =====================
    LabVIEW 2012

    julieann wrote:
    You can use a source control provider to share files among multiple users, improve security and quality, and track changes to shared projects. Use LabVIEW with third-party source control providers so you can check out files and track changes from within LabVIEW. See info. here.
    Looks suspiciously like someone trying to increase hits on their blog. A coincidence that it was posted the same day as the blog post?
    Message has been reported to the moderator. Laura can decide whether or not it's appropriate.

  • Why do people use SharePoint for Version Control?

    People have a few options for placing their FrameMaker documentation under version control. One option is to use the SharePoint CMS integration. What I don't understand is why someone would choose to use SharePoint for this. I have seen several posts with complaints like "I have everything set up correctly but SharePoint still isn't working correctly for X reason...". Now, there are business reasons why you may have to use SharePoint. For example, the organization you work for is all based on SharePoint and demands the use of SharePoint as a business requirement. However, if you have a choice, why not use Subversion? I have been using it for years with my FrameMaker documentation. There are no configuration steps: someone sets up the SVN repository and then you add the FrameMaker files. That's it; you are done. After that, SVN just works. The bottom line is that the SharePoint CMS sounds like a nightmare, and I can personally attest there are almost no problems with using an SVN repository. From a technical standpoint, I have no idea what SharePoint could possibly provide that would make it worth the hassle it puts people through to do simple check-ins and check-outs.
    Joe

    In my case, "business reasons" more or less nails it. Our company is implementing SharePoint and they're hoping I can use it for DITA.  They'll entertain other options only if there are good reasons why we can't make SharePoint work for this.  I've already started exploring the SharePoint API. Meanwhile, we have the SharePoint Connector working and we can check files in and out -- it's not that difficult.
    I've heard about Subversion, but I understand that it's mainly a source-control application. I have no shortage of those to choose from; our company already uses MKS and TFS. (In fact I'm using MKS to store one of my DITA projects.) SharePoint has an edge over them because it allows me to associate custom metadata with a library (say "topic type" or "audience"), complete with a list of fixed values like "concept" and "task" for our authors to choose from. I'm not sure if Subversion offers similar functionality.
    Where all of them fall down is in the area Nakshatra mentions -- dependency management. If I want to rename a file, or replace a Windows7 screenshot with a Windows8 screenshot that has a different file name, or I want to know everywhere a conref is used, or want an alert when someone changes the conref, I need an underlying database to make the file management system "DITA-aware." 
    I was all set to create such a database for our SharePoint implementation, along with a user interface -- very gradually, in small steps over a long time. FM's "FMDependency" field presents an unexpected complication for this plan, and I'm still absorbing that. 
    If Subversion is "DITA-aware" or has promising open-source plugins to make it so, I'm interested. Otherwise I still have to develop a database and UI, and in my case, I might as well try to do it with SharePoint. 
