VLAN Best Practice For IT Computers

In an environment with basic VLANs (servers, network infrastructure, workstations, IoT devices, wireless), is it commonly considered better to place IT workstations (around 6 or so) in their own VLAN?
I suppose I am debating whether to create access control rules for a single IT VLAN, or to put those machines elsewhere and possibly have to create the same rules for each separate machine if the VLAN they're in does not share the same rules. Initially all our VLANs will be without access control rules, but they will be tightened incrementally, with testing. Maybe that doesn't make sense...
Where do others find IT workstations fit best: in their own VLAN, or tucked away elsewhere (and with what)?
This topic first appeared in the Spiceworks Community
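To make the rule-count trade-off in the question concrete, here is a small illustrative sketch (not from the original thread; the subnets, hosts and rules are hypothetical). A policy written once against a dedicated IT VLAN's subnet replaces one copy of the same rules per workstation:

```python
# Hypothetical policy: the rules the IT staff machines need, written once.
IT_RULES = [
    ("allow", "ssh",   "any"),      # manage network gear
    ("allow", "rdp",   "servers"),  # administer servers
    ("allow", "https", "any"),
]

# Option A: a dedicated IT VLAN -- apply the policy once to the whole subnet.
it_vlan_policy = [("10.10.60.0/24",) + rule for rule in IT_RULES]

# Option B: IT machines scattered in a general workstation VLAN -- the same
# rules have to be repeated (and kept in sync) for every individual IT host.
it_hosts = ["10.10.40.21", "10.10.40.22", "10.10.40.23",
            "10.10.40.24", "10.10.40.25", "10.10.40.26"]
per_host_policy = [(host,) + rule for host in it_hosts for rule in IT_RULES]

print(len(it_vlan_policy))   # 3 rules to maintain
print(len(per_host_policy))  # 18 rules to maintain
```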


Similar Messages

  • Best practice for wireless AP VLANs

    Is there a best practice for grouping lightweight access points in one VLAN, or allowing them to be spread across several?

    Whether you have multiple sites or not, it's good practice to put your APs in a separate and dedicated VLAN. 
    If your sites are routed sites, then you can re-use the same VLAN numbers, but make sure they are on separate subnets and/or VRF instances.

  • Best practices for 1000v CTRL/PKT/MGT VLANs

    We are getting ready to start testing the 1000v in a lab environment, but have a question about VLAN best practices. The current Cisco 1000v guides indicate that the control/packet/management interfaces should all be on the same VLAN, but elsewhere I've seen references to each network type getting its own VLAN. Given the latest release of the 1000v VSM/VEM, what are the real-world best practices for these three interfaces? Would you typically put all three on the same management VLAN used for physical switches?

    You can safely put them on the same VLAN. The call to put them on different VLANs was in the initial 1.1 release. We changed that view with the 1.2 release and fully support and recommend putting them all on the same VLAN. The only requirement now is that we need no more than 100 ms of latency on the control network.
    Louis

  • What are best practices for managing my iphone from both work and home computers?

    What are best practices for managing my iphone from both work and home computers?

    Sync iPod/iPad/iPhone with two computers
    Although it isn't possible to sync an Apple device with two different libraries, it is possible to sync with the same logical library from multiple computers. Each library has an internal ID, and when iTunes connects to your iPod/iPad/iPhone it compares the local ID with the one the device normally syncs with. If they are the same you can go ahead and sync...
    I have my library cloned to a small 1 TB USB drive which I can take between home and work. At either location I use SyncToy 2.1 to update the local copy with the external drive (a rough scripted equivalent is sketched after this reply). Mac users should be able to find similar tools. I can open either of the local libraries or the one on the external drive and update the media content of my iPhone. The slight exception is Photos, which normally connects to a specific folder on a specific machine, although that can easily be remapped to the current library if you create a "Photos" folder inside the iTunes Media folder so that syncing the iTunes folders keeps this up to date as well. I periodically sweep my library for new files & orphans with iTunes Folder Watch, just in case I make changes at one location but then overwrite the library with a newer copy from the other. Again, Mac users should be able to find similar tools.
    As long as your media is organised within an iTunes Music or iTunes Media folder, in turn held inside the main iTunes folder that has your library files (whether or not you let iTunes keep the media folder organised), each library can access items at the same relative path from the library folder, so the library can be at different drives/paths on different machines. This solution ensures I always have adequate backups of my library, and I can update my devices whenever I can connect to the same build of iTunes.
    When working with an iPhone, earlier builds of iTunes would remove any file not physically present in the local library, even if there was an entry for it, making manual management practically redundant on the iPhone. This behaviour has been changed, but it will still only permit manual management with a library that has the correct internal ID. If you don't want to sync your library between machines on a regular basis, just copy the iTunes Library.itl file from the current "home" machine to any other you want to use, then clean out the library entries and import the local content you have on that box.
    tt2
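    For anyone without SyncToy (or on a Mac), the one-way mirror described above can be approximated with a short script. This is a rough stand-in, not SyncToy itself, and the two library paths are hypothetical:

```python
import filecmp
import shutil
from pathlib import Path

# Hypothetical paths -- adjust to your own library locations.
SOURCE = Path("D:/iTunes")               # e.g. the copy on the external USB drive
DEST = Path("C:/Users/me/Music/iTunes")  # the local library

def mirror(src: Path, dst: Path) -> None:
    """One-way mirror: copy anything new or changed in src over to dst."""
    for item in src.rglob("*"):
        target = dst / item.relative_to(src)
        if item.is_dir():
            target.mkdir(parents=True, exist_ok=True)
        elif not target.exists() or not filecmp.cmp(item, target, shallow=True):
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(item, target)

if __name__ == "__main__":
    mirror(SOURCE, DEST)
```

    Note that this only adds and updates files; unlike SyncToy's echo mode, it never deletes anything from the destination.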

  • Best practices for setting up users on a small office network?

    Hello,
    I am setting up a small office and am wondering what the best practices/steps are to set up and manage the admin, user logins and sharing privileges for the setup below:
    Users: 5 users on new iMacs (x3) and upgraded G4s (x2)
    Video Editing Suite: Want to connect a new iMac and a Mac Pro, on an open login (multiple users)
    All machines are to be able to connect to the network, peripherals and the external hard drive. Also, I would like to set up drop boxes to easily share files between the computers (I was thinking of using the external hard drive for this).
    Thank you,

    Hi,
    Thanks for your posting.
    When you install AD DS in the hub or staging site, disconnect the installed domain controller, and then ship the computer to the remote site, you are disconnecting a viable domain controller from the replication topology.
    For more detailed information, please refer to:
    Best Practices for Adding Domain Controllers in Remote Sites
    http://technet.microsoft.com/en-us/library/cc794962(v=ws.10).aspx
    Regards.
    Vivian Wang

  • Best Practices for Using Photoshop (and Computing in General)

    I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop work on a computer, or for conscientious computing in general.  I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
    It'd be great if everyone would contribute their ideas, and especially their personal experience.
    Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
    Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial.  For example, the difference in price between a top-of-the-line, high-performance, enterprise-class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks!  Disk drives do fail!  They're not all created equal.  What would it cost you in aggravation and time to lose your data?  Imagine it happening at the worst possible time, because that's exactly when failures occur.
    Use an Uninterruptible Power Supply (UPS).  Unexpected power outages are TERRIBLE for both computer software and hardware.  Lost files and burned-out hardware are a possibility.  A UPS that will power the computer and monitor can be found at the local high-tech store and doesn't cost much.  The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going.  Again, how much is it worth to you to avoid a computer outage and loss of data?
    Work locally, copy files elsewhere.  Photoshop likes to be run on files on the local hard drive(s).  If you work in a networked environment, rather than opening a file right off the network and then saving it back there, consider copying the file to your local hard drive and working on it there.  This way an unexpected network outage or error won't cause you to lose work.
    Never save over your original files.  You may have a library of original images you have captured with your camera or created.  Sometimes these are in formats that can be re-saved.  If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
    Save your master files in several places.  While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system).  Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
    Make Backups.  Back up your computer files, including your Photoshop work, ideally to external media.  Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive.  The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning, and it will be there if/when you have a failure or loss of data.  And if you're really concerned with backup integrity, you can unplug an external drive and take it to another location.  (A small scripted sketch of keeping timestamped copies follows this post.)
    This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
    Your ideas?
    -Noel
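    As a tiny companion to the "save your master files in several places" advice above, here is a minimal sketch of keeping timestamped copies of a working file; the file and backup locations are hypothetical:

```python
import shutil
import time
from pathlib import Path

# Hypothetical locations -- point these at your own working file and backup drive.
WORKING_FILE = Path("C:/Work/poster_design.psd")
BACKUP_DIR = Path("E:/PhotoshopBackups")

def snapshot(src: Path = WORKING_FILE, dest_dir: Path = BACKUP_DIR) -> Path:
    """Copy the master file to a timestamped name so older versions are kept."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.stem}_{stamp}{src.suffix}"
    shutil.copy2(src, dest)
    return dest

if __name__ == "__main__":
    print(snapshot())
```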

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • Networking "best practice" for setting up a farm

    Hi all.
    We would like to set up an OracleVM farm, and I have a question about "best practice" for configuring the network. Some background:
    - The hardware I have is comprised of machines with 4 gig-eth NICs each.
    - The storage will be coming primarily from a backend NAS appliance (Netapp, FWIW).
    - We have already allocated a separate VLAN for management.
    - We would like to have HA capable VMs using OCFS2 (on top of NFS.)
    I'm trying to decide between 2 possible configurations. The first would keep physical separation between the mgt/storage networks and the DomU networks. The second would just trunk everything together across all 4 NICs, something like:
    Config 1:
    - eth0 - management/cluster-interconnect
    - eth1 - storage
    - eth2/eth3 => bond0 - 802.1Q trunked, bonded interfaces for DomUs
    Config 2:
    - eth0/1/2/3 => bond0
    Do people have experience or recommendation about the best configuration?
    I'm attracted to the first option (perhaps naively) because CI/storage would benefit from dedicated bandwidth, and this configuration might also be more secure.
    Regards,
    Robert.

    user1070509 wrote:
    Option #4 (802.3ad) looks promising, but I don't know if this can be made to work across separate switches.
    It can, if your switches support cross-switch trunking. Essentially, 802.3ad (also known as LACP, or EtherChannel on Cisco devices) requires your switch to be properly configured to allow trunking across the interfaces used for the bond. I know that the high-end Cisco and Juniper switches do support LACP across multiple switches. In the Cisco world, this is called MEC (Multichassis EtherChannel).
    If you're using low-end commodity-grade gear, you'll probably need to use active/passive bonds if you want to span switches. Alternatively, you could use one of the balance algorithms for some bandwidth increase. You'd have to run your own testing to determine which algorithm is best suited for your workload.
    The Linux Foundation's Net:Bonding article has some great information on bonding in general, particularly on the various bonding methods for high availability:
    http://www.linuxfoundation.org/en/Net:Bonding
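    Once a bond is up on the Dom0, the Linux bonding driver reports the negotiated mode and the state of each slave in /proc/net/bonding/<bond>. As a quick sanity check that 802.3ad actually negotiated, you can read that file; here is a minimal sketch, assuming a Linux host with the bonding module loaded and an interface named bond0:

```python
from pathlib import Path

BOND = "bond0"  # assumed interface name

def bond_summary(bond: str = BOND) -> None:
    """Print the bonding mode and the link state of each slave from /proc."""
    text = Path(f"/proc/net/bonding/{bond}").read_text()
    slave = None
    for raw in text.splitlines():
        line = raw.strip()
        if line.startswith("Bonding Mode:"):
            print(line)
        elif line.startswith("Slave Interface:"):
            slave = line.split(":", 1)[1].strip()
        elif line.startswith("MII Status:") and slave:
            print(f"{slave}: {line.split(':', 1)[1].strip()}")
            slave = None

if __name__ == "__main__":
    bond_summary()
```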

  • Best practice for managing a Windows 7 deployment with both 32-bit and 64-bit?

    What is the best practice for creating and organizing deployment shares in MDT for a Windows 7 deployment that has mostly 32-bit computers, but a few 64-bit computers as well? Is it better to create a single deployment share for Windows 7 and include both versions, or is it better to create two separate deployment shares? And what about 32-bit and 64-bit versions of applications?
    I'm currently leaning towards creating two separate deployment shares, just so that I don't have to keep typing (x86) and (x64) for every application I import, as well as making it easier when choosing applications in the Lite Touch installation. But I know each deployment share has the option to create both an x86 and x64 boot image, so that's why I am confused.

    Supporting two task sequences is way easier than supporting two shares. Two shares means two boot media, or maintaining a method of directing the user to one or the other. Everything needs to be imported or configured twice, not to mention doubling storage space. MDT is designed to have multiple task sequences, so why wouldn't you use them?
    Supporting multiple task sequences can be a pain, but it's not bad once you get a system. Supporting app installs intelligently is a large part of that. We have one folder per app install, with a wrapper VBScript that handles OS detection (a rough sketch of the idea follows this reply). If there are separate binaries, they are placed in x86 and x64 subfolders. Everything runs from one folder via the same command, "cscript install.vbs". So: import once, assign once, and forget it. It's the same install package we use for Altiris, and we'll be using a PowerShell version of it when we fully migrate to SCCM.
    Others handle x86 and x64 apps separately, and use the MDT app details to select which platform the app is meant for. I've done that, but since we have a template for the VBScript wrapper and it's a standard process, I believe it's easier. YMMV.
    Once you get your apps into MDT, create bundles: core build bundle, core deploy bundle, laptop deploy bundle, etcetera. Now you don't have to assign twenty apps to both task sequences, just one bundle. When you replace one app in the bundle, all task sequences are updated automatically. It's kind of the same mentality as Active Directory: users, groups and resources = apps, bundles and task sequences.
    If you have separate build and deploy shares in your lab, great. If not, separate your apps into build and deploy folders in your lab MDT share. Use a selection profile to upload only your deploy side to production. In fact, I separate everything (except drivers) into build and deploy folders on my lab server. Don't mix build and deploy, and don't mix lab/QA and production. I also keep a "Retired" folder: when I replace an app, TS, OS, etcetera, I move it to the retired folder and append "RETIRED - " to the front of it so I can instantly spot it if it shows up somewhere it shouldn't.
    To me, the biggest "weakness" of MDT is its flexibility. There are literally a dozen different ways to do everything, and there are no fences to keep you on the path. If you don't create some sort of organization for yourself, it's very easy to get lost as things get complicated. Tossing everything into one giant bucket will have you pulling your hair out.
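    The wrapper described above is VBScript; purely as an illustration of the same pattern (one folder per app, with the wrapper detecting the OS architecture and picking the x86 or x64 binaries at run time), here is a minimal Python sketch. The folder layout, installer name and silent switch are hypothetical and will differ per application:

```python
import os
import platform
import subprocess
import sys

# Hypothetical layout: the app folder contains x86\setup.exe and x64\setup.exe.
APP_DIR = os.path.dirname(os.path.abspath(__file__))

def is_64bit_windows() -> bool:
    """Detect the OS architecture (not the bitness of this interpreter)."""
    return platform.machine().endswith("64") or "PROCESSOR_ARCHITEW6432" in os.environ

def main() -> int:
    arch = "x64" if is_64bit_windows() else "x86"
    installer = os.path.join(APP_DIR, arch, "setup.exe")  # hypothetical name
    if not os.path.isfile(installer):
        print(f"No installer found for {arch}", file=sys.stderr)
        return 1
    # "/S" is a common silent-install switch, but it varies per installer.
    return subprocess.call([installer, "/S"])

if __name__ == "__main__":
    sys.exit(main())
```

    The point is the same as the VBScript version: import once, assign once, and let the wrapper worry about the platform.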

  • Best Practice for FlexConnect Wireless roaming in MediaNet environment?

    Hello!
    Current Cisco best-practice recommendations for enterprise MediaNet design specify that VLANs be local to a switch / switch stack (i.e., to limit the scope of spanning-tree).
    In the wireless world, this causes problems if you want roaming users to keep real-time applications up and running.  Every time they connect to a new AP on a different VLAN, they will need to get a new IP address, which interrupts real-time apps.
    So... best practice for LAN users causes real problems for wireless users.
    I thought I'd post here in case there's a best practice for implementing wireless roaming in a routed environment that we might have missed so far!
    We have a failover pair of FlexConnect 7510s, btw, configured for local switching for Internal users, and central switching with an anchor controller on the DMZ for Guest users.
    Thanks,
    Deb

    Thanks for your replies, Stephen and JSnyder.
    The situation here is that the original design engineer is no longer here, and the original design was not MediaNet-friendly, in that it had a very few /20 subnets bridged over entire large sites.
    These several large sites (with a few hundred wireless users per site) are connected to an HQ location (where the 7510s in failover mode are installed) via 1 Gb Ethernet hand-offs (MPLS at the WAN provider).  The 7510s are new, and are replacing older controllers at the HQ location.
    The internal employee wireless users use resources both local to their site, as well as centralized resources.  There are at least as many Guest wireless users per site as there are internal employee users, and the service to them consists of Internet traffic only.  (When moved to the 7510s, their traffic will continue to be centrally switched and carried to an anchor controller in the DMZ.) 
    (1) So, going local mode seems impractical due to the sheer number of users whose traffic bound for their local site would be traversing the WAN twice.  Too much bandwidth would be used.  So, that implies the need to use Flex / HREAP mode instead.
    (2) However, re-designing each site's IP environment for MediaNet would suggest to go routed to the closet.  However, this breaks seamless roaming for users....
    So, this conundrum is why I thought I'd post here, and see if there was some other cool / nifty solution I wasn't yet aware of. 
    The only other (possibly friendly to both needs) solution I'd thought of was to GRE tunnel a subnet from each closet to the collapsed Core / Disti switch at each site.  Unfortunately, GRE tunnels are not supported in the rev of IOS on the present equipment, and so it isn't possible to try this idea.
    Another "blue sky" idea I had (not for this customer, but possibly elsewhere in the future), is to use LAN switches such as 3850s that have WLC functionality built-in.  I haven't yet worked with the WLC s/w available on those, but I was thinking it looks like they could be put into a mobility group, and L3 user roaming between them might then work.  Do you happen to know if this might be a workable solution to the overall big-picture problem? 
    Thanks again for taking the time and trouble to reply!
    Deb

  • New mac - what is best practice for accounts?

    I am about to get a new Mac (iMac G5), and would like it to work well (i.e. file transfer and backup to/from my existing PowerBook).
    Is there a best practice for account setup? Should I use the same accounts on both machines, or can I set up a new account on the new Mac?
    Related to this: what will keychain sync give me? Does that only work with the same accounts on two Macs?
    thanks
    John

    With Tiger there is a migration assistant that will move everything over from your Powerbook to your new iMac G5. All you need is a firewire cable and when prompted in your first start-up select migration assistant and connect the two computers. You will need to boot up your Powerbook holding down the 't' key before you connect the two together. Good luck, Jack.

  • Best practices for updating agents

    We're getting ready to do our first system-wide update of agents to fix a critical bug. Our summer vacation is just ending, and teachers and students will be coming back very soon and turning on our Windows 7 computers for the first time in many weeks, although they won't all be turned on the same day. When they are turned on they will be attempting to get various updates, in particular Windows updates, but also Flash Player and Adobe Reader. I need to update the agents as quickly as possible, but I'm concerned about the possibility of the agent update conflicting with another update, especially Windows updates. Isn't it possible that Windows Update could restart a computer while the agent update is happening (or the other way around), leaving the machine in an unstable or unusable state? What are the best practices for dealing with this? I considered the possibility of deploying the agent to a dynamic workstation group whose members all have a certain file or files that indicate that they have already received the latest Windows updates. However, I can't see how to create a dynamic group based on such criteria.
    So far I have only updated a few devices at a time using "Deploy System Updates to Selected Devices in the Management Zone". When those updates are done I cancel that deployment because that's the only option I can find that does anything. If you can offer general advice for a better strategy of updating agents I'd appreciate that. Specifically, how would you push an agent update to several hundred computers that will be turned on sometime over the next two weeks?
    Thanks very much.

    Originally Posted by jcw_av
    We're getting ready to do our first system-wide update of agents to fix a critical bug. [...]
    To be honest, you have to work around your other deploys, etc. The ZCM agent isn't "aware" of other deploys going on. For example, ZPM doesn't care that you're doing Bundles at the same time (you'll get errors in the logs about the fact that only one MSI can run at a time, for example). ZPM usually recovers and picks up where it left off.
    Bundles on the other hand, with System Update, are not so forgiving. Especially if you have the agents prior to 11.2.4 MU1 (cache corruption errors).
    We usually:
    a) Halt all software rollouts/patching as best we can
    b) Our software deploys (bundles) are on event: user login. Typically the system update is on device refresh OR a scheduled time, and is device-associated.
    IF possible, I'd suggest that you use WOL, system update and voila (a small WOL sketch follows this reply).
    Or, if no WOL is available, then tell your users to leave their PCs turned on (they don't have to be logged in) on X night, and set up your system updates for that night, with auto-reboot enabled. That worked well.
    But otherwise the 3 components of ZCM (Bundles, ZPM, System Update) don't know/care about each other, AFAIK.
    --Kevin
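    For the WOL route, the magic packet itself is easy to send from any machine on the same broadcast domain; here is a minimal sketch (the MAC address is a placeholder, and the broadcast address/port may need adjusting for a routed network):

```python
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Send a standard Wake-on-LAN magic packet: 6 x 0xFF followed by the MAC 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

if __name__ == "__main__":
    wake_on_lan("00:11:22:33:44:55")  # placeholder MAC of a lab workstation
```

    This only helps, of course, if WOL is enabled in the workstations' NICs and BIOS/UEFI.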

  • Best practice for intervlan routing?

    Are there some best practices for inter-VLAN routing?
    I've been reading a lot, and I have seen these scenarios:
    router on a stick
    inter-VLAN routing at the core layer
    inter-VLAN routing at the distribution layer
    Or is inter-VLAN routing needed at all if the switches will do the routing?
    I've done all of the above, but I just want to know what's current.

    The simple answer is it depends, because there is no one right solution for everyone.
    So there are no specific best practices. For example, in a small setup where you may only need a couple of VLANs, you could use an L2 switch connected to a router or firewall using subinterfaces to route between the VLANs.
    But that is not a scalable solution. The most common approach in any network where there are multiple VLANs is to use L3 switches to do this. This could be a pair of switches interconnected and using HSRP/GLBP/VRRP for the VLANs, or it could be stacked switches/VSS etc. You would then dual-connect your access layer switches to them.
    In terms of core/distro/access layers, in general, if you have separate switches performing each function, you would have the inter-VLAN routing done on the distribution switches for all the VLANs on the access layer switches. The core switches would be used to route between the distribution switches and other devices, e.g. WAN routers, firewalls, maybe other distribution switch pairs.
    Again, generally speaking, you may well not need VLANs on the core switches at all, i.e. you can simply use routed links between the core switches and everything else.
    The above is quite a common setup, but there are variations, e.g.:
    1) a collapsed core design where the core and distribution switches are the same pair. For a single building with maybe a WAN connection plus internet this is quite a common design because having a completely separate core is usually quite hard to justify in terms of cost etc.
    2) a routed access layer. Here the access layer switches are L3 and the VLANs are routed at the access layer. In this instance you may not even need VLANs on the distribution switches, although again, to save cost, servers are often deployed onto those switches, so you may.
    So a lot of it comes down to the size of the network and the budget involved as to which solution you go with.
    All of the above is really concerned with non DC environments.
    In the DC the traditional core/distro or aggregation/access layer was also used and still is widely deployed but in relatively recent times new designs and technologies are changing the environment which could have a big impact on vlans.
    It's mainly to do with network virtualisation, where the vlans are defined and where they are not only routed but where the network services such as firewalling, load balancing etc. are performed.
    It's quite a big subject, so I didn't want to confuse the general answer by going into it, but feel free to ask if you want more details.
    Jon

  • Best practice for Catalog Views? :|

    Hello community,
    A best practice question:
    The situation: I have several product categories (110), several items in those categories (4000) and 300 end-users. I would like to know the best practice for segmenting the catalog. I mean, some users should only see categories 10, 20 & 30; other users only category 80, etc. The problem is: how can I implement this?
    My first idea is:
    1. Create 110 procurement catalogs (1 for every product category). Each catalog should contain only its product category.
    2. Assign, in my Org Model at the user level, all the "catalogs" that the user should access.
    Do you have any ideas on how to improve this?
    Saludos desde Mexico,
    Diego

    Hi,
    Your way of doing it will work, but you'll get maintenance issues (too many catalogs, and catalog links to maintain for each user).
    The other way is to build your views in CCM and assign these views to the users, either on the roles (PFCG) or on the user (SU01). The problem is that with CCM 1.0 this is limited, because you'll have to assign the items to each view one by one (no dynamic or mass processes); this has been enhanced in CCM 2.0.
    My advice:
    - Challenge your customer about views, and try to limit the number of views, with for example strategic and non-strategic
    - With CCM 1.0, stick to the procurement catalogs, or implement BAdIs to assign items to the views (I experienced it, it works, but it is quite difficult), but with a limited number of views
    Good luck.
    Vadim

  • Best Practice for Securing Web Services in the BPEL Workflow

    What is the best practice for securing web services which are part of a larger service (a business process) and are defined through BPEL?
    They are all deployed on the same oracle application server.
    Defining agent for each?
    Gateway for all?
    BPEL security extension?
    The top-level service that is defined as a business process is itself secured through OWSM with usernames and passwords, but what is the best practice for establishing security for each of the low-level services?
    Regards
    Farbod

    It doesn't matter whether the service is invoked as part of your larger process or not; if it is performing any business-critical operation, then it should be secured.
    The idea of SOA / designing services is to have the services available so that it can be orchestrated as part of any other business process.
    Today you may have secured your parent services and tomorrow you could come up with a new service which may use one of the existing lower level services.
    If all the services are in one Application server you can make the configuration/development environment lot easier by securing them using the Gateway.
    The typical problem with any gateway architecture is that the service is available without any security enforcement when accessed directly.
    You can enforce rules at your network layer to allow access to the App server only from Gateway.
    When you have the liberty to use OWSM or any other WS-Security product, I would stay away from any extensions. Two things to consider:
    The next BPEL developer in your project may not be aware of the security extensions.
    Centralizing security enforcement keeps your development and security operations loosely coupled and addresses scalability.
    Thanks
    Ram

  • Best practice for multi-language content in common areas

    I've got a site with some text in the header/footer/nav that needs to be translated between an English and a Spanish site, which use the same design. My intention was to set up all the text as content to facilitate translation. However, if I use a standard dialog with the component's path set to a child of the current page node, I would need to re-enter the text on every page. If I use a design dialog, or a standard dialog with the component's path set absolutely, the English and Spanish sites will share the same text. If I use a standard dialog with the component's path set relatively (e.g. path="../../jcr:content/myPath"), the pages using the component would all need to be at the same level of the hierarchy.
    It appears that the Geometrixx demo doesn't address this situation, and leaves copy in English. Is there a best practice for this scenario?

    I'm finding that something to the effect of <cq:include path="<%= strCommonContentPath + "codeEntry" %>" resourceType ...
    works fine for most components, but not for parsys, or a component containing a parsys. When I attempt that, I get a JS error that says "design.path is null or not an object". Is there a way around this?
