CVS ideas?

I'm in the process of developing my own Linux system. Not exactly what I would call a distro as yet, but it is early days.
I've got the system to the point where it boots and I can log in. It is VERY minimal. Now I need to (and am about to) start working on my system-specific tools and configurations; however, before I do, I would like to set up a CVS repo to store any and all changes made, along with all the software that I develop for it.
Ideally I would like to be able to make / my working directory; however, this doesn't really seem possible or feasible. I've asked this question on many Linux boards and {linux,cvs} IRC channels, and have googled plenty, but don't seem to be getting any help.
Has anyone here got ANY theories on how exactly I could set up CVS so that I can check out my changes directly into / of a live/working Linux OS? I know this sounds pretty silly considering the project I've just described myself undertaking, but I'm really not all that knowledgeable when it comes to CVS.
How do devs on other small distros do it? If it helps any, I will probably be doing most of my dev work from within another OS, chrooted into my system.

You got a straight answer, just not one you were looking for.
If you are bound and determined to use cvs (gah. don't do it), then you will run into some serious problems. cvs has pretty bad support for binaries, as I recall (it's been a long time since I used cvs).
I would recommend having a 'build' tree with your build scripts (think of the Arch ABS tree).
Probably a separate tree for any of your custom developed tools, etc.
Then you use the tools, and the build scripts tree, to output the built system.
I think trying to version a full binary system would lead to more problems than versioning the tools and build scripts used to construct the binary system.
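A minimal sketch of that build-tree approach, assuming a local repository under /var/cvsroot and a module called buildscripts (both names are only examples):

# one-time: create the repository
cvs -d /var/cvsroot init
# import the build-script tree as a module
cd ~/mydistro/buildscripts
cvs -d /var/cvsroot import -m "initial import" buildscripts mydistro start
# check out a working copy somewhere ordinary (not /) and work from there
cd ~/work
cvs -d /var/cvsroot checkout buildscripts

The built system is then produced from the checked-out scripts, so / itself never has to be a CVS working directory.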

Similar Messages

  • IDE gets stupid with CVS

    When using CVS to control file versions of our project (web + EJB), Sun One Studio (v4 upd 1)
    gets stupid. First, getting a new version of the EJB descriptor files (using WinCVS) causes the IDE to generate funny (?) errors like this:
    CMP Mapping Error in bean SmxPrvServices:: This error (The field profileVersion does not have a valid lower bound) should not occur.
    Please contact support and file a bug. You can restart the IDE to try to recover from this error.
    Is an upgrade to version 5 necessary? Thanks for any suggestions,
    Robert

    I am having this problem without CVS and with Sun One Studio (v5 upd 1), so an upgrade won't fix your problems....
    http://swforum.sun.com/jive/thread.jspa?threadID=22257&tstart=0
    If you find a solution I would love to hear about it.....

  • Ideas or help needed for a simple, robust pluggable framework

    Hi all,
    Having written a fairly decent plugin engine, similar in concept to the Eclipse plugin engine, although at a more generic scale, I am looking for any possible ideas for a Java Swing framework that is built around the engine, with the concept of using a framework that is built on mostly plugins. My engine handles, or will soon handle, a number of features to make the engine robust enough, yet still easy enough, to use for just about any purpose.
    The engine is pretty simple, although with a bit more work I feel it will be, overall, a pretty robust and powerful plugin engine. Each plugin is made up of one or more "services". A plugin is a .jar file that contains a plugin-conf.xml config file, the classes that implement the Service interface, and any supporting classes. The "plugin" is really the package of one or more services and supporting classes. The engine will handle the ability to work with expanded dir structures as well, so that the build process doesn't have to create .jar files on every build of a plugin. The engine has built-in support to load, unload and reload a plugin at runtime. This helps during development by allowing auto-reload of a plugin service without having to restart the app. The engine has the ability to "watch" URLs in a separate thread (still working on this), and at given intervals if a change occurs to any plugin, that plugin is reloaded. This is configurable on a per-plugin basis in the config file.
    Every plugin .jar file gets its own classloader instance. Because of the nature of a framework that may rely heavily on plugins, it will be very common to have plugin dependencies, where a plugin service may rely on one or more other plugin services. The dependencies are configured in the plugin-conf.xml file, and the engine resolves these when the plugin is loaded, automatically. Once all plugins have been loaded, an "init" call is made that then goes and resolves all plugin service dependencies, setting up the behind the scenes work to make sure any service can use any other service it defines to depend on. Another area is plugin versions. There will no doubt be a time when some sort of application may have legacy plugins, but also have newer plugins. For example, an application built on a "core" set of plugins, may eventually update the core plugins with newer versions. The engine allows the "old" plugins to exist and work while new versions of the same plugins may be loaded and working at the same time. This allows older plugins that depend on the old set of core plugins to work, while newer plugins that depend on the new core plugins may work also. Any plugin may depend on one or more services specified by specific versions, or a range of versions.
    Plugin services can be declared to be created when first loaded, or lazily instantiated. Ideally, an application would opt for lazy instantiation until a plugin is needed. For example, a number of plugins may need to add menu items or buttons that would trigger their services. The plugin does not actually need to be created until the menu or button is clicked on. There is one BIG problem with how this engine works though. Unlike the Eclipse (and other) engines where the config file defines the menu item(s), buttons, etc. in an XML sort of language, this engine is built for generic use, and therefore is not specific to menu items or buttons triggering a service instantiation. Therefore, a little "hack" is required. A specific plugin that is created when first loaded will be required to set up all the menu items for specific plugins, then handle the actionPerformed() call to instruct the engine to create the service. The next step would be for the plugin service to add its own handler to the specific menu item it depends on, and remove the "old" handler the startup plugin added to it to handle the initial click. Another thought just struck me though. Because the engine must use an XML parser to load every plugin-conf.xml file, it might be possible to "extend" the parsing routine, whereby an extending class could be added to the engine to parse plugin-conf.xml files. First the plugin engine's own routine would parse it. Then, the extending class could parse for any extra plugin-conf.xml info, such as menu item settings, and directly set up the menu items and handlers in this manner. I will probably include this ability directly in the engine soon anyway, so that nobody else has to do this, but this is one area I would appreciate some feedback on.
    Anyway, so that is the gist of the engine. There is more to it under the hood, but that sums up a good part of it. Now, the pluggable framework, much like what the "shell" of Eclipse, Forte and so forth offer, is built around my engine to make it very easy to build Swing applications with a pluggable framework underneath. The idea is to package up a startup main class that is configurable, plus a number of useful plugins that other plugins could depend on, such as an Outlook layout, menuing, toolbars, drag/drop, history, undo/redo, macro record, open/save/search/find/replace dialogs, and so forth. This isn't just for an IDE though. The developer using the framework could deploy the basic app with the plugins of his/her choice, and add to it with his/her own plugins.
    Soooo, after this long post, what I am getting at is if anyone would be interested in helping out with ideas, feedback, testing, core framework plugins, and so forth. At this time I am keeping the code closed, but will probably public domain it, open source it, or whatever. The finished framework should make it easy for anyone to quickly build useable applications, and if all goes well, I'd like to set up a site with a location for 3rd party plugins to be uploaded, for download, comments, etc. Being a web developer, I myself will probably work on some plugins for Web Services, web stress testing, and so forth. I have lots of ideas for useable plugins.
    On that note, one application I am personally working on for my own use, is a simple yet possibly robust internet suite of apps. I want to incorporate FTP, Email, NewsGroup, and IRC/AOL IM/Yahoo IM/MSN IM/ICQ chat into a single app. Every aspect of it would be plugins. Frankly, I hate outlook, Eudora is alright, but I want to do some things with the email app. I also want a single IM/Chat app that can talk with all protocols (not an easy task, take a look at GAIM). Newsgroups are handy to work with for developers and others of interest, as is FTP. But even more so, being able to have all in one big application framework that allows them to share data between each other, work with one another, and so forth is appealing to me, and being written in Java it could potentially work on many platforms, giving some platforms a possible nice set of internet apps to use. Being able to send an email to a mailing list AND have it posted to specific newsgroups at the same time without having to copy/paste, open up separate applications and so forth has appeal. Directly emailing from any chat or newsgroup link without another app starting up is a little faster as well. Those are just "small" things that could prove to be very kewl in a complete internet app. Adding a web browser, well, I don't think I want to go that route. But if there is already a decent Java built web browser, it shouldn't be too hard to add it as a plugin.
    So, if anyone is interested, by all means, drop a post to this thread, let me know of interest, feedback, ideas, point out bad things, and so forth. I appreciate all forms of communication.
    Thanks.

    Yes I do. I am using it now with my work related project.
    I am in fact reworking the engine a bit now. I want to incorporate the notion of services (like OSGi), whereby a plugin can register services. These services are "global" in scope, meaning any plugin may request the use of a service. However, services, unlike plugins, are not guaranteed to be available. Therefore, plugins using services must be coded to properly handle this possibility. As an example, imagine an email application using my engine. One plugin may provide the email gateway, including the JavaMail .jar library, and provide the email service. Other plugins, such as the one that provides the functionality for the SEND button, would "use" this service. At runtime, when the send button is pressed, it would ask the engine for the email service. If available, off goes the email. If not, it could pop up a dialog indicating some sort of message that the email service is not available.
    I am at the VERY beginning stages in this direction so I'd love to have ideas, thoughts, suggestions as to how this might be implemented. I do believe though that it will provide for a more powerful engine. The nice thing is, while the engine will support static runtime plugins, it will also support dynamic services that can come and go during the runtime. The key is that plugins using services do not maintain references to them, but instead query the engine each time a plugin needs to use a service.
    Static plugins are those that are guaranteed to be available, or if not, any dependent plugin is not allowed to load. That is, if A depends on B and B is not able to be loaded, A is unloaded as well, since it can't perform its job without B; it depends on B in some manner to complete its function. Imagine a plugin adding an option panel to the Preferences page, only the Preferences plugin is not loaded. It just can't work. However, with some work, there could be variations on this. That is, a plugin may provide a menu item as well as a preferences page. If the Preferences plugin is not available, then the plugin may simply still work via the menu item, but have no preferences panel available. This should be configurable via the plugin-conf.xml config file. However, as I have it now, using extension points and extensions like Eclipse does, it is also possible that if the Preferences plugin isn't loaded, it won't look for ANY extensions extending its extension point, and therefore the plugins could all still run but there would simply be no preferences page. So, I am not entirely sure yet which way is best for this to work.
    My engine, as it stands now, allows for separate-classloader plugin loading, and it automatically resolves all dependencies by creating the plugin registry each time the engine is started up. To speed up plugin loading, it maintains a plugins.xml file in the root dir that keeps track of each plugin that was loaded and its last timestamp. Plugins can be open directory files or jarred up into .PAR files (think .WAR or .EAR files). The engine can find .par or open-dir plugins in multiple locations (including URL locations for direct .par files). When it finds a .par file, it first decompresses the .par file to a plugin work directory. Every plugin must have a plugin-conf.xml in its root dir, and either a /classes dir where compiled classes are, or a .jar file in the root path of the plugin, where the /classes dir supersedes the .jar file. Alternatively, anything in a /lib dir is automatically picked up as part of the plugin classpath. So a plugin that wraps the xerces.jar file can simply place the xerces.jar in the /lib dir and automatically present the xerces library to all dependent plugins (which can import the xerces classes but not need to distribute the xerces.jar file if a plugin they depend on has it in its /lib dir). The "parent lookup" process goes only one parent level deep. That is, if plugin A depends on a class in a /lib/*.jar file in plugin B, then the engine will resolve the class (through delegation) of plugin B. But if A depends on B, and B depends on C, where plugin C's /lib/*.jar file contains a class A is looking to use, this will not work and A will throw a ClassNotFoundException. In other words, the parent lookup only goes as far as the classpath of all dependent plugins, not up the chain of all dependent plugins. Eclipse allows each plugin to "export" various classes, or packages, or entire .jar files, and the lookup can go all the way up the chain if need be. I haven't yet found a big reason for supporting this, so I am not too concerned with that at this point. The engine does support reloadable plugins, although I have not yet implemented it. Because each plugin information object is stored in a Map keyed on the plugin's GUID (found in the plugin-conf.xml file), it is easy enough to load a new plugin (since they get their own classloader) and replace the object at the GUID key and now have a reloaded plugin. The harder part is properly notifying all dependent plugins of the reload and what to do with them. Therefore I have not quite yet implemented this feature, although the first step can easily be done, so long as nobody minds the "remnants" of older plugins lying around and possibly not being garbage collected.
    All of this works now, and I am using it. I do NOT have a generic UI framework just yet. I am working on that now. Eclipse has a very nice feature in that every plugin.xml file builds up the UI without any plugin code ever being created or run. I am working on something like that now, although I am focused more on the engine aspect at this point.
    Two things keep me going. First, the sheer fun of working on this and seeing it succeed, even if a little bit. Second, while I love the idea of Eclipse, OSGi and other engines, so far I have yet to find one that is very easy to write plugins for, is very small, and is "generic" enough for any use. Some may argue JBoss core, at 29K, can do this. I don't know if it can. It is built around JMX, and I don't know that I agree JMX is the "ultimate" core plugin engine for all types of apps. Not that mine is either, but I'd like to see what I am working on become that if possible. Currently, with an XML parser (www.xmlpull.org) added as part of the code, my engine is about 40K with debug info, maybe about 28K without. I expect it to grow a bit more with services, reloadable/unloadable code, and some other stuff. However, I am thinking it will still be around 50K in size, and in my opinion, with an XML read/write parser (a very fast one at that), extensions/extension points, services, dependencies, multiple versions of plugins (soon), load/unload/reload capabilities, .par management (unjar into work dir, download .par files from URLs, etc.) and open directory capabilities, individual classloaders, automatic dependency resolution, dynamic dependency resolution and possibly even more, I think what my engine offers (and will offer) is pretty cool in my book.
    Nonetheless, there is always room for improvement. One of the things I pride myself on is using as little code as possible and keeping the code neat, easily readable, and as non-archaic as possible; that makes for an easily maintainable project.
    So, having said all that, YES, the engine can be used as is right now. It does not reload plugins, but you can dynamically load plugins, handle dependency resolution, have a very fast xml read/write parser at your disposal for any plugin, and for the most part easily write plugins. That is all possible now. I should put the engine I have now up on my generic-plugin-engine sourceforge project one of these days, perhaps soon I will do that! While I have no problem handing out the code, I am currently the only committer and I don't have it loaded into CVS at this point. I would like to do so very soon.
    So, if you are interested, by all means, let me know and I'll be happy to send you what I have, and love to have more help on the next version of this.

  • Unable to build project after checkout from CVS

    When we import our java source files to the CVS module and later check the files out,
    we cannot compile the files using JDeveloper. We get the following error from JDeveloper in the messages window: Internal compiler error.
    We can, however, compile the source files using javac. Any ideas?
    Environment:
    Java IDE: Oracle 9i JDeveloper 9.0.2.829 running on Win2k, using Oracle OJVM
    CVS Server: CVS 1.11.2 running on Red Hat 8 (rsh enabled)
    CVS Client: JCVsII v.5.3.2 connecting using server mode

    When checking the module (files) out from CVS, do you get the same number of files, and the same content, as before the import?

  • Best practice: team based development with JDeveloper and CVS?

    Hi all!
    I was wondering what the best way is to work with JDev 9i and CVS on the same project with 5 developers, where
    all developers use the same JPRs and JWS?
    Which of the files should be checked in to the central CVS repository, and which should remain on the local machine of each developer?
    I assume all java and xml files should be stored in the CVS repository. But how can we make sure that new files written by a member of the team will be added to my project?
    If we also check in the JPR and CFG files, the merge of concurrent JPRs will fail and our project will be shredded ;-)
    My question: what is your best experience with simultaneous development on the same projects?
    Any idea?
    Many thanks,
    Stefan

    1. Put everything that your project needs under CVS control:
    - buildscripts
    - BC4J jarfiles
    - BC4J generated files (java, xml, xcfg, jpx, cpx)
    - .properties, package.html, gif, ...
    - docu
    - install scripts
    - starter batch or exec, ...
    2. Each developer should have their own JPR and JWS file. In a separate location, those files could be checked in frequently to easily allow the setup of a new developer workstation
    3. Use a sourcecode formatter (e.g. jalopy) on the BC4J generated java files to reduce merge conflicts because of empty lines generated by JDev dialogs
    4. Adding new files is no problem:
    - if you add new BC4J objects (AM, VO, ASC, EO, VL), also check in the BC4J package XML so new files will be added.
    - set in project settings common / input paths - "Scan source path ..."
    5. Deleting BC4J files is a problem, because JDev does not automatically remove them from the project. So if someone deletes BC4J objects, you should close the project and delete it manually from the jpr file
    6. Close the project before making a CVS update with an external tool like Tortoise, because of JDev caches
    Regards, Markus
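    As an aside, CVS also honors a per-directory .cvsignore file, which helps keep purely local files (build output, logs, scratch configs) from showing up as unknown ('?') on every update or being swept in by a wildcard add; the patterns below are only examples:
    # .cvsignore in the project root -- example patterns only
    classes
    *.log
    *.tmp
    local_*.properties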

  • Cvs problem in flex builder 2.0

    Hi, everyone. I have installed my Flex Builder 2.0 without
    problem, and I have created some apps using it.
    It is really good.
    But the question is, when I set up a CVS client in Flex
    Builder, I can only do commit / update / synch. Why are the Edit /
    Unedit menus disabled? Does anyone know? I think it is
    very inconvenient.
    Thank you

    Hi There
    I am wondering if you can help me. I am looking to build a simple Flex front end with a Java back end, just to send simple requests and responses. By the sounds of your email, you have an idea of how to do this. I am using WebSphere as my server, and I am wondering if you have any links to tutorials on how to do this, or any sample projects yourself. Any help would be amazing.
    Thanks

  • Help Required in JAVA CVS

    Hi
    I am using Java CVS for my application. First I created the pserver
    connection; it works well in stand-alone code, but when I use this code in JSF (using Sun Studio Creator), the code below takes my application into a never-ending loop in the browser, where the browser keeps waiting for a reply and the progress bar keeps spinning. Please help me.
    Thnx in advance
    Here is my Source Code
    try {
        PServerConnection c = new PServerConnection();
        Scrambler scrambler = StandardScrambler.getInstance();
        c.setUserName("username");
        c.setHostName("hostname");
        c.setRepository("/repository");
        c.setEncodedPassword(scrambler.scramble("password"));
        c.open(); // this code is not running properly
        System.out.println("connection created");
    } catch (Exception e) {
        System.out.println("connection not created");
        e.printStackTrace();
    }
    Regards
    Raviraj Gangrade

    Please check online help:
    http://developers.sun.com/prodtech/javatools/jscreator/reference/docs/help/2update1/vcs-nb/vcs_cvs.html
    and
    Source Code Control Features in the Sun Java Studio Creator 2 IDE
    http://developers.sun.com/prodtech/javatools/jscreator/reference/fi/2/source-code-control.html

  • Invalid entry size using internal CVS

    Using the internal CVS in JDev 10.1.3, I get "Invalid entry size" when a binary file is checked out from the repository. The CVS server has the right flag set for zip files.
    Any ideas?

    Are you using a CVSNT server?
    We have an open bug about some incompatibility between the internal client and CVSNT when using wrappers.

  • Creator 2 - CVS Issue

    Gentlemen,
    I have the following issue, and I was wondering if I am the only one or if I missed something
    (I am a new Java Studio Creator 2 user, but I am fairly proficient in CVS)..
    I started by installing Creator on my work computer.. Created a project and imported it into my CVS (so far, so good)..
    Next, I went home - wanting to do a bit more on that project, so I installed Creator on my home computer..
    Checked out my project from my CVS server on which I initially imported the project (that worked fine) and started working on it. Made a few changes here and there - so I thought it was time to commit the changes back to my CVS server..
    Unfortunately, none of the files/folders show any 'CVS' contextual menu item. Neither in the 'file' tab, nor in the 'version revision' tab.
    Mind it, it is NOT a big deal (I am sure I can still manually commit the changes back using a command line CVS) - but it is a little annoying not to be able to do it from the IDE.
    At this point I know Studio Creator CVS still works because I can still issue a 'checkout' to update my local work repository in respect to the server repository (it just tells me the files I worked on are locally modified ('M' status) - I am the only one working on that project).
    Also note that I didn't have that problem on the computer from which I issued the initial CVS import.
    Are there any workarounds ? Did I miss something - or is this a bug ?
    Thanks,
    --Ivan

    Hi,
    I choose a "Use build in CVS client" in the IDE
    Unfortunatelly, none of the files/folders show any 'CVS' contextual menu item. Neither in the 'file' tab, nor in the 'version revision' tab.I can see the folders/files under versioning tab.
    MJ

  • I'm trying to import contacts to outlook 2011 from a cvs file.  I've followed all the steps, mapped the fields, clicked import -- it shows that it imported my 900 contacts --- but the contact field remains blank and I can't see any information.  Help?

    I'm trying to import contacts to outlook 2011 from a cvs file.  I've followed all the steps, mapped the fields, etc.  It appears to import all 900 contacts, but after finishing nothing appears in my contact list.  Any ideas on why this is not working?

    Option 1
    Back up and try rebuilding the library: hold down the Command and Option (or Alt) keys while launching iPhoto. Use the resulting dialogue to rebuild. Choose Repair Database. If that doesn't help, then try again, this time using Rebuild Database.
    If that fails:
    Option 2
    Download iPhoto Library Manager and use its rebuild function. (In early versions of Library Manager it's the File -> Rebuild command. In later versions it's under the Library menu.)
    This will create an entirely new library. It will then copy (or try to) your photos and all the associated metadata and versions to this new Library, and arrange it as close as it can to what you had in the damaged Library. It does this based on information it finds in the iPhoto sharing mechanism - but that means that things not shared won't be there, so no slideshows, books or calendars, for instance - but it should get all your events, albums and keywords, faces and places back.
    Because this process creates an entirely new library and leaves your old one untouched, it is non-destructive, and if you're not happy with the results you can simply return to your old one.  
    Regards
    TD

  • Some init script ideas

    I'm packaging GNUstep... again (this time from CVS), and it requires a certain shell script to be sourced by every shell.  That's easy enough for any user to do by modifying their .foorc and adding it in there, but I thought I would just throw a command in the postinstall to symlink the shell script to /etc/profile.d, so every login shell would automatically source it.
    That's all fine and dandy, but xterms by default aren't login shells, so anybody using an xterm would find themselves without a working GNUstep build environment unless they set xterm to be a login shell in their .Xdefaults.
    Change #1) Make xterms login shells by default in Arch's xfree86 distribution
    Now that xterms are all login shells, and we can expect a user's shell to be a login shell by default, why not add scripts to profile.d for the various self-contained distributions that live in /opt (like gnome, kde, java, etc.) that automatically set up the paths for those binaries.  At the moment, it appears you have to do this manually (I have not installed gnome or kde though, maybe those maintainers are doing this).
    Change #2) Have packages that live in /opt add a script to profile.d setting up the paths and environment necessary to use those packages.
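    A minimal sketch of the kind of profile.d snippet Change #2 describes, using a hypothetical /opt package called foo (the name and path are purely illustrative):
    # /etc/profile.d/foo.sh -- set up paths for the hypothetical package living in /opt/foo
    if [ -d /opt/foo/bin ]; then
        PATH="$PATH:/opt/foo/bin"
        export PATH
    fi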
    Thirdly, there are two daemons that need to be started for GNUstep when xfree86 starts (specifically gdnc and gpbs). Once again, this is something that can easily be done by a user editing their .Xsession, but I'd rather automate it. In the spirit of Arch's init script system, I think it would be wise to make an xinit.d directory (possibly in /etc, or, more in keeping with X's directory structure, in /etc/xinit.d), full of files that are run whenever X is started. These could be enabled and disabled in rc.conf just like init scripts in rc.d currently are.
    Change #3) Add an xinit.d directory, in the spirit of rc.d and profile.d, full of items that are run whenever X is started.
    Perhaps some of this has already been done, or perhaps there are other solutions.  I just think these changes would be well in the spirit of Arch, and a boon to package maintainers.
    Please give me your feedback either here or by mailing me at [email protected]
    -- Michael Baehr

    sarah31 wrote: Most if not all /opt package sets have profile.d scripts. Some may not think so, but one has to log out and back in to enable them.
    Good. Looks like profile.d is being used for the right purpose. I should've expected it would be :oops:
    Xentac wrote: Ok... but what happens if I'm running blackbox? I probably don't want gdnc and gpbs started when I start X, it'd make me cry. How would xinit.d help with that?
    I don't see why it'd make you cry.  Let me explain what gdnc and gpbs are:
    GNUstep Distributed Notification Center
    Handles messaging and notification between GNUstep applications
    GNUstep Pasteboard Server
    Handles rich copy and paste between GNUstep applications
    Basically, none of these would impact whatever environment you'd be running, and only GNUstep apps would use them.  In fact, you wouldn't even realize they were running as they'd just be sitting in the background like the valiant daemons they are, waiting to be called into service for the holy emperor GNUstep  :twisted:
    And in any case, with the whole xinit.d idea, you should be able to just disable it with a ! if you don't like it
    I might as well throw in my last idea, which I forgot to write before.
    I've noticed people complaining about Pacman upgrades wiping their pacman.conf, which is a problem if you're using several people's TURs and other external repositories like I am.  Instead, I propose doing what APT has been doing in a recent version with its new sources.list.d layout... having one directory (pacman.repo.d or something of its ilk) storing files for each repository, and enabling and disabling them in pacman.conf the way you do in rc.conf (with a bang).  This would be especially nice because somebody could install a package called, for example, "pacman-TUR", and have a repo file for each TUR, and then be able to enable or disable them.  This package would be updated every time a new TUR was added or one was removed, thus enabling people to easily track the latest repository happenings, while still maintaining control over what repos they use.
    Just a thought.
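    To sketch what that last proposal could look like (purely illustrative of the idea; pacman.repo.d and a bang-style repo list are not existing pacman features, and the repository name and URL are invented):
    # /etc/pacman.repo.d/tur-somebody -- one file per external repository
    [tur-somebody]
    Server = ftp://ftp.example.org/tur/somebody
    # pacman.conf would then enable/disable repositories rc.conf-style, e.g.
    # REPOS=(current extra !unstable tur-somebody)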

  • Moving existing project to an IDE

    I am considering moving an existing, 12-year-old Java hobbyist project (http://r0k.us/graphics/SIHwheel.html) to an IDE. Up to now, development has been with the TextPad text editor, the msWindows cmd tool, and the CVS version system. Before starting, I'd like to get some opinions on what kinds of things should be considered, and on the current strengths and weaknesses of the various IDEs. I've heard good things about NetBeans, Eclipse, and IntelliJ.
    Here are some of the parameters involving the project:
    1) two project directories
    1a) ntc, with 3 java files
    1b) sihwheel, with 5 java files, and some resources (5 properties files, 2 html files, and a PNG image)
    2) 33 classes get created, then jar'red along with the resources of 1b
    3) application normally runs as an applet, but can run as program
    4) open source project
    5) no build support (ant, etc) yet
    Hmm, should I implement build support first, or do it at the same time? The distributed pieces of the build would be SIHwheel.jar, SIHwheel.html, and SIH_source.zip. Since they can run stand-alone, I also distribute, from the ntc project, NTC.java, ColorName.java, and Hilb.java.
    Some of my desires for an IDE are:
    1) not complex to learn or use
    2) easy import of the above "raw" project without forcing major reconfiguration of directory structure, etc.
    3) ability to still make q&d changes with Textpad, javac, and jar
    4) debug support
    5) multi-platform support (I normally run Windows 7, but can boot into Ubuntu Linux)
    Consider those to be more of a wish list than requirements. I'm flexible ;) and would like the IDE to be flexible as well.
    -- Rich

    Just focusing on that point (the rest are answered already I think)
    Hmm, should I implement build support first, or do at same time? The distributed pieces of the build would be SIHwheel.jar, SIHwheel.html, and SIH_source.zip.
    Note that both Eclipse and NB have "single-target" builds by default (they build a jar, or, in the case of a J2SE app in NetBeans, a 'dist' folder with a main jar and dependencies). Building several targets will require manual configuration on your part.
    In the case of NetBeans, whose build system relies on Ant (generated Ant build files, which you can manually edit afterwards), building several targets (one jar, one HTML page, one zip) will require customizing the generated build script, so you'll have to learn Ant anyway.
    In the case of Eclipse, I know you can "use" an Ant build script, which you create and edit through the text editor, but it does not natively generate it (there are probably plugins that do that). I think there are wizards to request the creation/copy of so-and-so (Jar file I'm sure, copy too, zip I don't know), which you can customize at least per source folder, but this customization won't propagate automatically to the Ant build script.
    The last time I used that, Jar generation was "manual" (a menu item somewhere triggered a "generate jar" build), but this may have evolved.
    I can't comment on IDEA, never used it.

  • CVS commit comment no longer populated from task after upgrade to Mars

    Hi,
    after I upgraded to Eclipse Mars, I no longer get the CVS commit comment populated from the active Mylyn Bugzilla task.
    I have installed:
    Eclipse Mars CDT edition
    Mylyn Context Connector: C/C++ Development
    Mylyn Context Connector: Eclipse IDE
    Mylyn Task List
    Mylyn Task-Focused Interface
    Mylyn Versions Connector: CVS
    Mylyn Versions Connector: Git
    In the preferences under Mylyn, Team I have the same Commit Comment Template that I used with all previous Eclipse/Mylyn versions:
    ${task.key}: ${task.description}
    ${task.url}
    What is missing to get the CVS commit comment populated again from the active Mylyn Bugzilla task?
    Thanks
    Stephan

    Sam Davis wrote on Wed, 15 July 2015 10:56: CVS support has been moved to a separate feature. You'll need to install Mylyn Context Connector: CVS Support (org.eclipse.mylyn.team.cvs).
    Thanks for the suggestion, Sam. Unfortunately, I can't find the "Mylyn Context Connector: CVS Support" when I select the "Mars - http://download.eclipse.org/releases/mars" update site. Must I get that from another update site?
    Note that I forgot one relevant plugin in my initial question. In addition to the plugins listed there I also have installed the "Mylyn Context Connector: Team Support" plugin. So the full list of Mylyn plugins that I have installed is:
    Mylyn Context Connector: C/C++ Development
    Mylyn Context Connector: Eclipse IDE
    Mylyn Context Connector: Team Support
    Mylyn Task List
    Mylyn Task-Focused Interface
    Mylyn Versions Connector: CVS
    Mylyn Versions Connector: Git
    Stephan

  • Versioning: CVS Status not displaying new repository files

    I did some quick searches and did not find this in the forum. Hopefully it is not a duplicate. If so, please point me to the original thread.
    We have a multi-developer project underway with Creator. We are using CVS for the shared source repository. When one developer checks in a new file and other developers use the CVS Status option, the new files do not display as 'Needs Checkout'. Of course, any files the user already has checked out display with the appropriate status, but new repository files are missing.
    Since we use CVS extensively, right now the developers communicate the new files and then check them out via command-line CVS. We have the IDE set in 'expert' mode, etc.
    Is this a known bug, or is there some setting for the IDE to get it to display files with status 'Needs Checkout' from the CVS repository?

    As per the documentation at:
    http://ximbiot.com/cvs/manual/cvs-1.11.22/cvs_10.html
    Based on what operations you have performed on a checked out file, and what operations others have performed to that file in the repository, one can classify a file in a number of states. The states, as reported by the status command, are:
    Needs Checkout
        Someone else has committed a newer revision to the repository. The name is slightly misleading; you will ordinarily use update rather than checkout to get that newer revision.
    ...Thus the 'cvs status' command works only on local files and 'Needs Checkout', from cvs documentation itself, is a misnomer and is really more like 'Needs Update'.
    In Creator (and NetBeans), the nodes on the cvs explorer actually map to physical files and thus do not display nodes for files that may only be in the server. But you do raise an interesting question though; creator could display virtual nodes for files not in local workspace thereby providing a preview before actual checkout...
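    On the command line, a dry-run update is one way to preview such repository-only files without touching the workspace (a minimal sketch using standard cvs options):
    # -n = don't change anything, -q = quiet; -d also picks up new directories
    cvs -n -q update -d
    # lines starting with "U" are files the repository has that the local copy doesn't (or that are out of date)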

  • [Development] Code::Blocks C/C++ IDE

    Here's an incomplete PKGBUILD for Code::Blocks.
    The trouble is it uses a non-standard build script; once it compiles, the files are all over the place, and I have no idea where the files are or where to put them!! (hence the return 1 in the PKGBUILD)
    Any help packaging this would be good, since I'm after a replacement for Anjuta (its code completion is basically non-existent, and its auto-formatting and `indent' code is terrible.. bleugh)
    Here is the wiki for its installation; it's not much help at all:
    http://codeblocks.sourceforge.net/wiki/ … distros%29
    #contributor: Adam Griffiths <adam_griffithsAATTdart.net.au>
    pkgname=codeblocks
    pkgver=1.0beta7
    pkgrel=1
    pkgdesc="Code::Blocks is a free C/C++ IDE built specifically to meet the most demanding needs of its users. It has been designed, right from the start, to be extensible and configurable..."
    url="http://www.codeblocks.org/"
    license="GPL"
    depends=('wxgtk')
    source=(http://dl.sourceforge.net/sourceforge/$pkgname/$pkgname-1.0-beta7.tar.gz)
    build() {
      cd $startdir/src/$pkgname-1.0-beta7/src/
      make -f Makefile.unix prefix=$startdir/pkg
      return 1   # intentionally stop here -- the install step is still unsolved (see above)
    }
    md5sums=('c5f5d3c1e9bb29d9d746a153e781015b')

    This works fine:
    #contributor: Adam Griffiths <adam_griffithsAATTdart.net.au>
    #contributor: Bart Leusink <[email protected]>
    pkgname=codeblocks-cvs
    pkgver=20051003
    pkgrel=2
    pkgdesc="Code::Blocks is a free C/C++ IDE built specifically to meet the most demanding needs of its users. It has been designed, right from the start, to be extensible and configurable..."
    url="http://www.codeblocks.org/"
    depends=('wxgtk')
    conflicts=(codeblocks)
    provides=(codeblocks)
    source=()
    md5sums=()
    build() {
      cvs -z5 -d:pserver:[email protected]:/cvsroot/codeblocks co -P codeblocks
      cd $startdir/src/codeblocks/
      ./bootstrap
      ./configure --prefix=/usr
      make
      make DESTDIR=$startdir/pkg install
      find $startdir/pkg -name '*.la' -exec rm {} \;
    }
    (cvs is the recommended way of installing codeblocks for linux)
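    For completeness, building and installing from that PKGBUILD follows the usual Arch workflow (the package file name below is a glob, since the exact name depends on pkgver and pkgrel):
    # run in the directory containing the PKGBUILD
    makepkg
    pacman -U codeblocks-cvs-*.pkg.tar.gz   # as root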
