Setting up a general Dev Machine: best practices

Hi,
I would like to be less reliant on my local VMs for working with SharePoint and WordPress, so I thought I would spin up a new server with VS2015 Preview. I want to make smart choices from the start, so any advice on the points below would be appreciated.
Users - which user should I create? I just created a random user, which I think must be local or default domain since it is not in my AD. Is it easy to add more?
VS2015 updates - do these happen automatically?
How easy is it to migrate to another VM and take my settings with me?
Where to store my code and project files - I want to work on stuff on a mapped virtual/cloud drive from any VM, then check in to VS Online once I am happy. Are there any recommendations?
RDP connection stability and timeout - the former isn't that great, but has anyone set the latter?
Daniel
Freelance consultant

Hi Daniel,
This forum is for discussing the Visual Studio WPF/SL Designer, Visual Studio Guidance Automation Toolkit, Developer Documentation and Help System, and Visual Studio Editor.
Also, we generally keep one thread to one issue, so if possible, please post different questions in different threads. Some of these issues are not VS General issues in any case, so I will point you to the correct forums.
>>Users - which user should I create? I just created a random user, which I think must be local or default domain since it is not in my AD. Is it easy to add more?
Do you mean that you want to know whether you can use the VS IDE when the current user is not an admin? I think that would be a VS setup issue; as far as I know, we install VS as an admin.
If I have understood the issue correctly, you could post it to the VS setup forum for a better response:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/home?forum=vssetup
If not, please feel free to let me know.
>>VS2015 updates - do these happen automatically?
The latest VS2015 release is VS2015 CTP 6; the RTM version has not been released yet. As far as I know, there is no update package for VS2015 CTP 6 at this time.
>>How easy is it to migrate to another VM and take my settings with me?
By "settings" you mean the VS settings, am I right? As far as I know, VS settings can be synchronized across computers when you sign in to VS with the same user ID, starting with VS2013, so my understanding is that VS2015 will support this feature as well.
Reference:
https://msdn.microsoft.com/en-us/library/dn135229%28v=vs.140%29.aspx?f=255&MSPPError=-2147217396
>>Where to store my code and project files - I want to work on stuff on a mapped virtual/cloud drive from any VM, then check in to VS Online once I am happy. Are there any recommendations?
If this issue is related to VSO, this forum may serve you better:
https://social.msdn.microsoft.com/Forums/vstudio/en-US/home?forum=TFService
>>RDP connection stability and timeout - the former isn't that great, but has anyone set the latter?
This seems to be related to Windows rather than VS; perhaps you could share more information about it.
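That said, if it is the session idle/disconnect timeout you want to change, those limits are normally set through Group Policy (Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Session Time Limits). The equivalent registry values can also be scripted; a rough sketch, assuming the standard Terminal Services policy key (values are in milliseconds, 0 means no limit - please verify before relying on it):
:: disable the idle-session and disconnected-session time limits
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v MaxIdleTime /t REG_DWORD /d 0 /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v MaxDisconnectionTime /t REG_DWORD /d 0 /f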
In addition, if you run into any SharePoint development issues, please post them here:
https://social.msdn.microsoft.com/Forums/office/en-US/home?category=sharepoint
If I have misunderstood this issue, please feel free to let me know.
Best Regards,
Jack

Similar Messages

  • List of activities for setting up ERP 6.0 with Best Practices

    Based on my understanding, if I were to plan the setup of an ERP 6.0 landscape with Best Practices (Full Scope), I would consider executing the following activities:
    Prepare EHP4 Landscape
    Install ERP 6.0 on DEV
    Upgrade DEV to EHP4 SP06
    Install SAP Best Practices v1.604
    Activate Full Scope of Best Practices on DEV
    Prepare QA (System Copy - DEV with BP Activated)
    Prepare PRD (System Copy - DEV with BP Activated)
    Register Landscape in Solution Manager
    Customization on EHP4
    Customize Best Practices Scenarios on DEV
    Transport Changes to QA
    Test Changes  on QA
    Transport Changes to PRD
    Upgrade to EHP5
    Upgrade DEV, QA and PRD to EHP5
    Install HCM Localization on DEV, QA and PRD
    Customization on EHP5
    Customize HCM Best Practices Scenarios on DEV
    Transport Changes to QA
    Test Changes  on QA
    Transport Changes to PRD
    Please advise if there is anything missing or incorrect.
    Thanks.

    Hi,
    I'm on a project with similar requirements. I follow this order for the steps that you describe:
    Install ERP 6.0 on DEV
    Upgrade DEV to EHP4 SP06
    Install SAP Best Practices v1.604 on DEV
    Install QA
    Install SAP Best Practices v1.604 on QUA
    Install PRD
    Install SAP Best Practices v1.604 on PRD
    Activate Full Scope of Best Practices on DEV
    Register Landscape in Solution Manager
    Upgrade DEV, QA and PRD to EHP5
    Install HCM Localization on DEV, QA and PRD
    Customize Best Practices Scenarios on DEV
    Transport Changes to QA
    Test Changes  on QA
    Transport Changes to PRD
    I hope this is useful for you.
    Best regards.
    Alejandro Cepeda.

  • Time Machine best practices after Lion to Mountain Lion upgrade

    I've made the upgrade from Lion to Mountain Lion and everything seems to be OK.
    I have been using Time Machine for backups since I deployed my first and, so far, only Mac (Mac Mini running Lion) in 2011.  I run my TM backups manually.  Since upgrading to Mountain Lion, I have not yet kicked off a TM backup, so my questions involve best practices with TM after an upgrade from Lion to Mountain Lion:
    Can I simply use the same drive as I use currently, do what I've always done, start the backup manually, and have TM gracefully handle the new backup from the new OS?
    At this point, since I only have backups of the Lion system, what I see when I double-click the Time Machine drive is a folder called “Backups.backupdb”, then a subfolder called “My Mac mini”, and then all the backup events. Nothing else. What will I see once I do a backup now, after the Mountain Lion upgrade?
    If I for some reason needed to boot to my old Lion system (I cloned the startup disk prior to upgrading to ML) and access my old Lion backups with TM, would I be successful? In other words, does the system know that I'm booted into Lion, and so give me access to the TM backups created under Lion? Conversely, when booted into the new Mountain Lion system, will I have access only to the backups created since the upgrade to Mountain Lion?
    Any other best practices steps I should take prior to my first ML backup?
    Time Machine is a great, straightforward system to use (although I have to say I've not (yet) needed to depend on it for recovery... I trust that will go well when needed), but I don't want to make any assumptions as to how it works after a major OS upgrade.
    Thank you for reading.

    1. Correct. If you ever want to downgrade to OS X Lion, your Mac will still keep the backups created with OS X Lion: just start into Internet Recovery and select one of the backups made with OS X Lion. If you don't want Time Machine to back up automatically, you may want to use TimeMachineEditor.
    2. After making a backup with Mountain Lion, what you see will be the same, but with a new folder for the new backup you have created.
    3. See my first answer.
    4. One piece of advice: when your Time Machine drive gets full, Time Machine deletes old backups, so it may remove all your OS X Lion backups. However, I don't think you will need to go back to OS X Lion.
    If you have any questions apart from those, see Pondini's website: http://pondini.org
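    Since you run your backups manually, note that you can also kick one off from Terminal with tmutil (available since 10.7); a quick sketch:
    # start a manual Time Machine backup
    tmutil startbackup
    # show the path of the most recent completed backup to confirm it ran
    tmutil latestbackup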

  • Eclipse / Workshop dev/production best practice environment question.

    I'm trying to setup an ODSI development and production environment. After a bit of trial and error and support from the group here (ok, Mike, thanks again) I've been able to connect to Web Service and Relational database sources and such. My Windows 2003 server has 2 GB of RAM. With Admin domain, Managed Server, and Eclipse running I'm in the 2.4GB range. I'd love to move the Eclipse bit off of the server, develop dataspaces there, and publish them to the remote server. When I add the Remote Server in Eclipse and try to add a new data service I get "Dataspace projects cannot be deployed to a remote domain" error message.
    So, is the best practice to run everything locally (admin server, Eclipse/Workshop). Get everything working and then configure the same JDBC (or whatever) connections on the production server and deploy the locally created dataspace to the production box using the Eclipse that's installed on the server? I've read some posts/articles about a scripting capability that can perhaps do the configuration and deployment but I'm really in the baby steps mode and probably need the UI for now.
    Thanks in advance for the advice.

    You'll want 4GB.
    - mike

  • Setting up a new SCOM Enviroment best practice pointers

    Hello
    I currently have SCOM 2007 R2 and am looking to implement SCOM 2012 R2. We have decided to start afresh with a new management group (rather than upgrade the existing SCOM 2007 R2 environment, as there is a lot of bad practice and lessons learnt that we do not want to bring over to the new environment).
    I am looking for some practical advice on how to avoid pitfalls down the line.
    For example, in the past, as was recommended, when installing a new MP I would create a new unsealed MP to store any overrides for the newly imported sealed MP. So let's say I imported an MP called "System XYZ"; I would then create an unsealed MP called "Override System XYZ". On the surface that looks fine, but when you sort in the Console they do not end up next to each other, because the Console sorts alphabetically on the first letter. I should therefore have called the MP "System XYZ Override", and the same for the others; that way each sealed MP and its equivalent unsealed MP would have sorted next to one another.
    The above is a very simple example of where doing something one way, although it looks OK at the start, would have been much better done another way.
    Also, when it comes to groups, I used to create a group in an unsealed MP (e.g. the override MP) relevant to the rule/monitor application in question. The issue, as you know, is that with an unsealed MP you cannot reference the group from another MP. Therefore, if I needed to reference the same group of computers again in another MP, I could not do so without creating another group, and thereby duplicating work and giving the RMS more groups to populate and maintain.
    However, I have also read that creating an MP for groups, then sealing and unsealing it to add more groups etc., can be an issue of its own. I am not sure why this sealing/unsealing of an MP dedicated to groups would be an issue; perhaps someone has experience of this and can explain?
    I would be very grateful for any advice (URLs to documents etc.) with best-practice tips, to help avoid things such as the above down the line.
    Thanks all in advance
    AAnotherUser

    The following articles should be helpful for implementing SCOM 2012 R2:
    System Center Operations Manager 2012: Expand Monitoring with Ease
    http://technet.microsoft.com/en-us/magazine/hh825624.aspx
    Operations Manager 2012 Sizing Helper Tool
    http://blogs.technet.com/b/momteam/archive/2012/04/02/operations-manager-2012-sizing-helper-tool.aspx
    Best practices to use when you configure overrides in System Center Operations Manager
    http://support.microsoft.com/kb/943239/en-us
    Best Practices When Creating Dashboards In OM12
    http://thoughtsonopsmgr.blogspot.hk/2012/09/best-practices-when-creating-dashboards.html
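    As for sealing a dedicated groups MP: sealing is normally done with sn.exe (from the Windows SDK) to generate a key, then MPSeal.exe (found under SupportTools on the SCOM media). A minimal sketch, with paths, file names and company name as placeholders - check MPSeal /? for the exact switches on your version:
    :: generate a key pair once
    sn -k GroupsMP.snk
    :: seal the unsealed groups MP; /I points at a folder containing the referenced sealed MPs
    MPSeal.exe Contoso.Groups.xml /I "C:\Program Files\Microsoft System Center 2012 R2\Operations Manager\Server" /Keyfile GroupsMP.snk /Company "Contoso" /Outdir C:\SealedMPs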
    Roger

  • VM Manager on virtual machine - best practice

    I am setting up a server using Oracle VM to limit the number of CPUs available to a virtual machine which will be running an Oracle database.
    I have set up a test environment, one Oracle VM Server, and the Oracle VM Manager running on a separate server.
    I have read in this forum that I could create a linux virtual machine on the VM Server, and then install the Oracle VM Manager on that.
    Re: Oracle VM Manager installation
    To create the linux virtual machine, I used the Oracle VM Manager that I have running on a separate server.
    Once I created the linux virtual machine, I installed Oracle VM Manager on it.
    From this point, I hoped to create the virtual machine upon which I will run the Oracle database, but I get the following message:
    OVM-2008 The Server Pool Master (10.36.64.225) has been registered with some other pool, and can not register it again.
    So if running the Oracle VM Manager on a virtual machine on the Oracle VM Server is a good idea, how do I do it?
    Many thanks.
    Paul

    Hi Tommy,
    Thanks for taking the time to reply to my query.
    Unfortunately, things didn't go so well.
    I ran the commands you listed, checking the status of each VM Manager (the original one, running on a separate physical Linux server, and the one I want to use, running as a VM guest) between each command:
    /opt/ovs-agent-latest/db/db_del.py master
    - this provided no response, and logout/login from both VM managers showed no changes.
    /opt/ovs-agent-latest/db/db_del.py srv
    10.36.64.225 removed.
    - the oraVMmanager now listed as powered off in the original VM manager.
    /opt/ovs-agent-latest/db/db_del.py srvp
    10.36.64.225 removed.
    - no change in VM Managers
    /opt/ovs-agent-latest/db/db_del.py vm
    /OVS/running_pool/28_oraVMmanager removed.
    - no change to both VM Managers and the guest VM is still running.
    /sbin/service ovs-agent restart
    option 2 - guest VM shutdown (28_oraVMmanager)
    I couldn't get the guest VM to restart
    service xendomains start failed, saying there is a lockfile:
    /var/lock/subsys/xendomains
    I haven't been able to get past this point other than to re-install and start again.
    Maybe there is another way to do this...
    If I install the VM Server, can I create a Linux VM guest from the command line rather than from the VM Manager?
    Once I have a VM guest, I can log on to that and install the VM Manager.
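    For reference, what I'm hoping is possible is something along these lines (the vm.cfg contents and paths are guesses on my part):
    # write a guest config by hand under the /OVS layout, then start it with the Xen tools
    cat > /OVS/running_pool/myguest/vm.cfg <<'EOF'
    name = "myguest"
    memory = 2048
    disk = ['file:/OVS/running_pool/myguest/system.img,xvda,w']
    vif = ['bridge=xenbr0']
    bootloader = '/usr/bin/pygrub'
    EOF
    xm create /OVS/running_pool/myguest/vm.cfg
    xm list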
    Cheers
    paul

  • Server backups, Time Machine, best practices

    Hey guys, I've got a Mac Mini running Lion Server that is providing VPN, Profile Manager, and some file sharing to macs and i-devices in my household.  I also am using portable home folders for the kids accounts. (and soon I may be doing the same for my account and the wife's account)  Connected to my Mini is a Drobo. 
    Currently I'm using Crashplan to backup data on all of our client machines.  (an iMac, and a couple of macbooks)  However, I would like to add Time Machine to the mix.  Here are my thoughts:
    client machines:
    - use TM to backup to the mac mini server.  (backups stored on drobo)
    - EXCLUDE the 'local' (sync'd) home folders for network users, since their home folders are actually stored on the server. 
    server:
    - use TM locally on the server to backup itself to a separate external disk.
    - this backup should INCLUDE the users home folders since they're not getting backed-up on the client side. 
    - optionally install crashplan on the server and use it to backup users' folders as well??
    The whole goal here is convenience. I trust Crashplan to back up my important stuff offsite (pictures, videos, etc), but in the event of a disk failure I want an easy, no-hassle way to fully restore a machine - either client or server - back to its original state before the disk failure. The only thing that has me scratching my head a bit is the user folder stuff. If everyone had local accounts it would be easy - just use TM to back up everything. But since the home folders (for the network users) are actually stored on the server, with a synchronized version on the client, it complicates things a bit.
    Love to hear anyone's feedback on how to proceed.  Thanks!

    It certainly sounds like a solid plan. You're correct in that it doesn't make sense to back up the home folders for network users, since those live on the server; in fact you might be able to get by without TM on the clients by considering the local Macs themselves the backups of the home folders. The odds of losing both are highly unlikely, except through some sort of proximity event like a burglary or house fire (knock on wood). In that respect cloud storage would actually be better, but again, I would be satisfied with the local machines themselves being the "backups", and you're going the extra mile by backing up said home folders via TM (I'm assuming via a Time Capsule or external hard drive?).
    Since the goal is convenience, you may also consider adding a second drive to your Mac Mini, assuming it doesn't already have one. Currently, I'm using RAID1 (mirroring) with two hard drives in my Mac Mini, so should a hard drive fail, it will automatically boot from the second drive and let you know the first one failed. It's certainly cheaper than a Time Capsule, especially if you have the tech prowess to take apart your Mac Mini and install a second hard drive. I purchased iFixit's kit for adding a second drive, and while the $50 cost wasn't ideal, it beat several hundred dollars for a Time Capsule, especially since I don't need terabytes of storage and I happened to have a spare 500GB HDD lying around... That could be you, too!
    As for the home folders, again I would go the simple route and just use a RAID1 array in the Mac Mini; that way there's no need to back up the home folders, they're already automatically mirrored to the second hard drive.
    As you can tell, I like RAID. Also, if you go the RAID route, they say HDD read speeds increase (there's some conflict on the Internet it seems, but I believe it... I see around a 20MB/s speed boost, not a huge deal, but it's something).
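    If you do go that route, the mirror can be created with diskutil from Terminal. A rough sketch - the disk identifiers here are examples, and creating the set erases both members, so check diskutil list first:
    # find the identifiers of the two drives
    diskutil list
    # create a mirrored (RAID1) set named "Mirror", formatted as Journaled HFS+
    # WARNING: this erases both member disks
    diskutil appleRAID create mirror Mirror JHFS+ disk1 disk2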

  • Is hard-coded subview sizing brittle or best practice?

    Coming from a web background, where explicit sizing is generally not considered best practice, I am not used to 'hardcoding' positioning values.
    For example, is it perfectly acceptable to create custom UITableViewCells with hardcoded values for subviews' frames and bounds? It seems that when the user rotates the device, I'd need another set of hardcoded frames/bounds? If Apple decides to release a larger TABLET ... then I'd need yet another update to the code - it sounds strange to me in that it doesn't seem to scale well.
    In my specific case, I'd like to use a cell of style UITableViewCellStyleValue2 but I need a UITextField in place of the detailTextLabel. I hate to completely write my own UITableViewCell but it does appear that in most text input examples I've found, folks are creating their own cells and hardcoded sizing values.
    If that is the case, is there any way I can adhere to the predefined positioning and sizing of the aforementioned style's textLabel and detailTextLabel (i.e. I'd like to replace or overlay my UITextField in place of the detailTextLabel but have all subview positioning stay intact)? Just after creating a default cell, cell.textLabel.frame returns 0, so I assume it doesn't get sized until the cell's layoutSubviews gets invoked... and obviously that is too late for what I need to do.
    Hope this question makes sense. I'm just looking for 'best practice' here.
    Message was edited by: LutherBaker

    I think devs will be surprised at the flexibility in their current apps when/if a tablet is released. I'm of the opinion that little energy, if any, will be needed to move existing apps to a larger screen. Think ratio, not absolute size.
    In terms of best practice... hold the course and let the hardware wonks worry about cross-over.

  • Best practices when using OEM with Siebel

    Hello,
    I support numerous Oracle databases and have also taken on the task of supporting Enterprise Manager (Grid Control). Currently we have installed the agent (10.2.0.3) on our Oracle database servers, so most of our targets are hosts, databases and listeners. Our company is also using Siebel 7.8, which is supported by the Siebel ops team. They are looking into purchasing the Siebel plugin for OEM. My question is: is there a general guide or best practice for managing the Siebel plugin? I understand that agents will be installed on each of the servers that have Siebel components, but what I have not seen documented is who is responsible for installing them. Does the DBA team need an account on the Siebel servers to do the install, or can the Siebel ops team do the install and have permissions set on the agent so that it can communicate with Grid Control? They will also want access to Grid Control to see the performance of their components; how do we limit their access to only the Siebel targets, including what is available under the Siebel Services tab? Any help would be appreciated.
    Thanks.

    There is a Getting Started Guide, which explains the installation:
    http://download.oracle.com/docs/cd/B16240_01/doc/em.102/b32394/toc.htm
    -- I presume there are two teams in your organization: the DBA team, which is responsible for installing the agent and owns Grid Control, and the Siebel ops team, which is responsible for monitoring the Siebel deployment.
    The following is my opinion, based on the above assumption:
    -- The DBA team installs the agent as a monitoring user.
    -- The Siebel ops team gives that user execute permission on the server manager (srvrmgr) utilities and read permission on all files under the Siebel installation directory.
    -- The DBA team provisions a new administrator for the Siebel ops team and restricts the permissions of this user.
    -- The Siebel ops team configures the Siebel pack in Grid Control (discovery, configuration, etc.).
    -- With the above setup, the Siebel ops team can view only the Siebel-specific targets.
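    If the Siebel servers are on Unix, the permission grants in the second step might look something like this (user name and paths are placeholders; on Windows you would use NTFS ACLs instead):
    # let the monitoring user run srvrmgr and read the Siebel installation
    chmod o+x /siebel/siebsrvr/bin/srvrmgr
    chmod -R o+rX /siebel
    # or, more narrowly, grant just the agent user with POSIX ACLs
    setfacl -R -m u:oemagent:rX /siebel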
    Thanks

  • Client on Server installation best practice

    Hi all,
    I have been wondering about this subject; I searched and found nothing relevant, so I am asking here:
    Is there any best practice/state of the art when you have a client application installed on the same machine as the database?
    I know the client app can use the server binaries, but should I avoid that?
    Should I install a separate Oracle client home and configure the client app to use the client libraries?
    In 11g there is no changeperm.sh anymore; does that prove Oracle is happy to have client apps using server libraries?
    For context: I'm on AIX 6 (or 7) + Oracle 11g.
    The client app will be an ETL tool, which explains why it is running on the DB machine.
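    In other words, what I'm contemplating is something like this in the ETL user's profile (the client home path is hypothetical):
    # point the ETL at a dedicated client home instead of the server home
    export ORACLE_HOME=/u01/app/oracle/product/11.2.0/client_1
    export PATH=$ORACLE_HOME/bin:$PATH
    # AIX resolves shared libraries via LIBPATH (LD_LIBRARY_PATH on Linux)
    export LIBPATH=$ORACLE_HOME/lib:$LIBPATH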

    GReboute wrote:
    EdStevens wrote: Given the premise "*when* you have a client application installed on the same machine as the database", I'd say you are already violating "best practice".
    So I deduce from what you wrote that you're absolutely against co-locating the client app and DB server, which I understand and usually agree with.
    Then you deduce incorrectly. I'm not saying there can't be a justifiable reason for having the app live on the same box, but as a general rule it should be avoided. It is generally not considered "best practice".
    But in my case, where I may load or extract hundreds of millions of rows, with GBs flowing through the network and possible disconnection issues, although I could have done it locally?
    Your potentially extenuating circumstances were not revealed until this architecture was questioned. We can only respond to what we see.
    The answer I'm seeking is a bit more elaborate than "shouldn't do that".
    By the way, CPU or Memory resources shouldn't be an issue, as we are running on a strong P780.

  • Tips and best practices for translating C into LabVIEW? SERIOUS newbie...

    I need to translate a C function into LabVIEW.  This will be my *first* LabVIEW project.  I've been reading some tutorials, and I'm still struggling to get my brain out of "C/C++ mode" and learn the LabVIEW paradigms.
    Structurally, the function that I need to translate gets called from a while-loop and performs a bunch of mathematical calculations. 
    The basic layout is something like this (this obviously isn't the actual code, it just illustrates the general flow control and techniques that it uses).
    struct Params
    {
        /* About 20 int and float parameters */
        float someParam;
        float someOtherParam;
    };
    int CalculateMetrics(struct Params *pParams,
                         float input1, float input2 /* etc. */)
    {
        int errorCode = 0;
        float metric1;
        float metric2;
        float metric3;
        /* Do some math like: */
        metric1 = input1 * (pParams->someParam - 5);
        metric2 = metric1 + (input2 / pParams->someOtherParam);
        /* Tons more simple math, plus a couple of for-loops */
        if (metric1 < metric2)
        {
            /* manipulate metric1 somehow */
        }
        else
        {
            /* set some kind of error code */
            errorCode = -1; /* placeholder value */
        }
        if (!errorCode)
            metric3 = metric1 + powf(metric2, 3); /* needs #include <math.h> */
        /* More math... etc. */
        /* update some external global metrics variables */
        return errorCode;
    }
    I'm still too green to understand whether or not a function like this can translate cleanly from C to LabVIEW, or whether the LabVIEW version will have significant structural differences. 
    Are there any general tips or "best practices" for this kind of task?
    Here are some more specific questions:
    Most of the LabVIEW examples that I've seen (at least at the beginner level) seem to rely heavily on using front panel controls to provide inputs to functions. How do I build a VI where the input arguments (input1, input2, etc.) come in as numbers and aren't tied to dials or buttons on the front panel?
    The structure of the C function relies heavily on stack variables like metric1 and metric2 to perform calculations. It seems that creating temporary "stack" variables in LabVIEW is possible but frowned upon. Is it possible to keep this general structure in the LabVIEW VI without making the code a mess?
    Thanks guys!

    There are already a couple of good answers, but to add to #1:
    You're clearly looking for the equivalent of a typical C function. Any VI that doesn't require opening the front panel (user interaction) can be such a function.
    If the front panel is never opened, the controls are merely used to send data into the VI, much like (identical to, really) the parameters in a C function declaration. The indicators can/will be the return values.
    Choosing which controls and indicators send data in and out of a VI is almost too easy: click the icon of the front panel (top right), show the connector, and click which control/indicator goes where. Done. That's your function's declaration.
    Basically, one function is one VI, although you might want to split things even further; don't create 3k x 3k pixel diagrams.
    Depending on the amount of calculation done in your if-thens, they might be sub-VIs of their own.
    /Y
    LabVIEW 8.2 - 2014
    "Only dead fish swim downstream" - "My life for Kudos!" - "Dumb people repeat old mistakes - smart ones create new ones."
    G# - Free award winning reference based OOP for LV

  • VPN3020 - ACS - Windows AD - best practices links

    Do you have a good link with general procedures and best practices for setting up VPN user authorization against a standard Windows domain/AD?
    VPN3020 -> RADIUS -> ACS (with the default policy pointing to Windows NT) does work, but I wanted more granular control over which users have VPN access.
    With this model, everyone who has a Windows account would automatically get VPN access.
    Also, is there any good reading on setting up "single logon" with the Cisco VPN client and a Windows domain?

    Try this link
    http://www.cisco.com/univercd/cc/td/doc/product/vpn/vpn3000/4_0/404acn3k.htm

  • What's 'best-practice' with external hard drives?

    Hello folks,
    I just got myself a 500GB LaCie d2 'Quadra' hard drive, and it works great - just as I was led to expect. Now I've connected it to my iMac with a FW400 cable. I've a few questions regarding general usage and 'best practice' when using an external hard drive like this:
    1. Do I need to disconnect it (pull out the cable from my iMac) every time I shut down - and reconnect on startup? Or can I leave it in and pretty much just forget about it?
    2. Can I turn it 'on' and 'off' any number of times (using the on/off switch on the back) when working on the iMac? I might like to switch it off if I'm not using it for an extended period of time while still working on the computer. Is this okay?
    3. When I'm not using the drive and the drive switch is 'off', can the drive still remain connected to 'mains' power? Or is it necessary to disconnect it from the 'mains' entirely?
    4. I understand it's best to disconnect it when 'Repairing Permissions'? Can this be confirmed?
    Thanks so much.
    Cheers!
    Steve.

    1. Do I need to disconnect it (pull out the cable from my iMac) every time I shut down - and reconnect on startup? Or can I leave it in and pretty much just forget about it?
    What I do is shut down my Mac, leaving it connected to the mains: the external HD, external speakers and other peripherals are all connected to a mains switch, and I turn these off. There's no need to disconnect the cable: some disks spin down when the computer is shut down, some don't. It probably wouldn't hurt to leave it spinning anyway, though I prefer to shut it off at the mains. Incidentally, I wouldn't disconnect the computer itself from the mains when you shut down: doing so will run down the PRAM battery and hasten the day it needs replacing, which is expensive.
    2. Can I turn it 'on' and 'off' any number of times (using the on/off switch on the back) when working on the iMac? I might like to switch it off if I'm not using it for an extended period of time while still working on the computer. Is this okay?
    I wouldn't do this: the most strain on a hard disk occurs when it is starting up, not when it is running, so I would leave it running the whole time the Mac is on. If you do switch it off, make sure to unmount it first (drag it to the Trash), otherwise you will have all sorts of problems.
    3. When I'm not using the drive and the drive switch is 'off', can the drive still remain connected to 'mains' power? Or is it necessary to disconnect it from the 'mains' entirely?
    No: I see no problem in leaving it plugged in to the mains: the 'off' switch disconnects it anyway.
    4. I understand it's best to disconnect it when 'Repairing Permissions'? Can this be confirmed?
    I've never heard this, and I can't see that there's any necessity: the repair process will be confined to the disk you have nominated to work on in any case.

  • Best practice for Documenting SOA Composites

    Hi,
    I am looking for any general guidelines or best practices for creating documentation for composites developed as part of a project.
    Are there any plugins which help export to Visio or another tool?
    I don't see a "create JPEG" button on the composite editor similar to the one in BPEL, so any suggestions for documenting that?
    In general, I would like your opinions/suggestions on adopting a process for better documentation.
    Thanks.

    Hi,
    As such, there are no particular guidelines or best practices followed for documentation.
    You can plug your source control system into JDeveloper, but that only helps during the coding process.
    We have used OER in our project for maintaining documentation and the relationships among different files (XSDs, WSDLs, BPELs, mediators, etc.).
    Thanks

  • [Solved] Keyboard layout - best practice

    I have two laptops running Arch Linux. On the first I have set a Norwegian keyboard with:
    Option "XkbLayout" "no"
    in the keyboard section of /etc/X11/xorg.conf.d/10-evdev.conf.
    On the second I have used
    localectl set-x11-keymap no
    from the CLI to set the keymap.
    What is best practice?
    Last edited by torors (2014-07-30 14:53:22)

    localectl creates /etc/X11/xorg.conf.d/00-keyboard.conf
    $ cat /etc/X11/xorg.conf.d/00-keyboard.conf
    # Read and parsed by systemd-localed. It's probably wise not to edit this file
    # manually too freely.
    Section "InputClass"
    Identifier "system-keyboard"
    MatchIsKeyboard "on"
    Option "XkbLayout" "pl"
    EndSection
    Another computer:
    $ cat /etc/X11/xorg.conf.d/00-keyboard.conf
    # Read and parsed by systemd-localed. It's probably wise not to edit this file
    # manually too freely.
    Section "InputClass"
    Identifier "system-keyboard"
    MatchIsKeyboard "on"
    Option "XkbLayout" "pl"
    Option "XkbModel" "pc105"
    Option "XkbOptions" "terminate:ctrl_alt_bksp"
    EndSection
    Neither of them has /etc/X11/xorg.conf.d/10-evdev.conf, but they both have /usr/share/X11/xorg.conf.d/10-evdev.conf - xorg-server 1.16 style.
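    For completeness, localectl takes layout, model, variant and options in one call, so both of the files above could have been produced without editing anything by hand; the values here are just examples:
    # layout "no", model pc105, empty variant, and an XkbOptions entry
    localectl set-x11-keymap no pc105 "" terminate:ctrl_alt_bksp
    # confirm what systemd-localed recorded
    localectl status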
    Last edited by karol (2014-07-30 12:54:24)
