Migration Questions! (fstab, bootloader, networking)

Background
My current laptop is a Toshiba R700, and it's on its way out unfortunately. I've been using Arch on it for a while, but I still feel like this is a beginner question.
I am planning to just swap the hard drive over, since I cheaped out and ordered the new machine without a drive. I think I have everything in order, but I have never done this before, so I want to run my plan by you guys, and I have some questions along the way.
I am replacing it with a Clevo W230SS.
R700:
-Intel graphics
-Intel wireless
Clevo:
-Nvidia+Intel graphics
-Bigfoot/Killer Wireless (supported in ath9k)
I have read https://wiki.archlinux.org/index.php/mi … w_hardware but it's a little unclear.
fstab
After converting to UUID, my current fstab is:
UUID=f5091d78-6070-4152-8610-2f8b07ca1700 / ext4 rw,relatime,discard 0 1
UUID=2057a94a-a8aa-4df5-ad51-cef18bb1b0ef none swap defaults,discard 0 0
So that part should be fine?
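For peace of mind, the UUIDs can be cross-checked before the swap. A minimal sketch (extract_fstab_uuids is a hypothetical helper name, not a stock tool) that lists the UUIDs an fstab references, for comparison against the output of "blkid -s UUID -o value" on the new machine:

```shell
# Sketch: list the UUIDs an fstab references so they can be compared
# against blkid output before first boot on the new machine.
# extract_fstab_uuids is a hypothetical helper, not an Arch tool.
extract_fstab_uuids() {
    awk '!/^[[:space:]]*#/ && $1 ~ /^UUID=/ {
        sub(/^UUID=/, "", $1)
        print $1
    }' "$1"
}
```

Since the drive moves over with its filesystems intact, the UUIDs themselves should not change; findmnt --verify (from util-linux) is another way to sanity-check the whole fstab.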
Bootloader
Next, I need to configure the bootloader. I can disable UEFI on my new system. My current bootloader is syslinux. Since my fstab now uses UUIDs, does that mean I don't need to make any bootloader changes?
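One caveat worth checking: syslinux does not read fstab, so the root= parameter in syslinux.cfg has to identify the partition on its own. A sketch of what a UUID-based entry might look like (the label and image paths are assumptions; use whatever your existing config already has):

```
LABEL arch
    LINUX ../vmlinuz-linux
    APPEND root=UUID=f5091d78-6070-4152-8610-2f8b07ca1700 rw
    INITRD ../initramfs-linux.img
```

If your current APPEND line names a /dev/sdXn device instead, it may still happen to work after the swap, but the UUID form is safer.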
Regenerate kernel image
I have no idea how I would do this without booting first. Does this mean I can't just boot my old kernel image?
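For what it's worth, this can be done without booting the old image at all: boot the new machine from an Arch live USB and regenerate the initramfs from a chroot. A hedged sketch, assuming the root partition shows up as /dev/sda1 (the device name is an assumption; adjust for your disk, and note this needs a live environment, so it is illustrative only):

```shell
# From an Arch live USB on the new machine (device name is an assumption):
mount /dev/sda1 /mnt
arch-chroot /mnt mkinitcpio -p linux   # rebuild the initramfs for the stock kernel
umount /mnt
```

That said, the stock Arch kernel plus the fallback initramfs will usually boot unmodified Intel hardware anyway, so trying a plain boot first costs nothing.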
Graphics Drivers
Since both systems have intel graphics, is it okay to leave this alone for now, and set up bumblebee and nvidia drivers later?
Network
I am currently using an Intel wireless card, but I will switch to the Bigfoot Killer Wireless-N, which worried me at first, but supposedly it's covered by the ath9k driver.
My concern is that I don't see ath9k in the /usr/lib/firmware directory; I only see ath6k. Am I missing a package?
http://wireless.kernel.org/en/users/Drivers/ath9k says:
To enable ath9k, you must first enable mac80211 through make menuconfig when compiling your kernel. If you do not know what this means then please learn to compile kernels or rely on your Linux distribution's kernel.
But then, I am able to modprobe ath9k without any issue, so does that mean udev will be able to find it? Or do I need to add it manually to /etc/modules-load.d/? I think it loads, because when I run lsmod after loading it I get:
ath9k 94641 0
ath9k_common 1906 1 ath9k
ath9k_hw 396294 2 ath9k_common,ath9k
ath 19419 3 ath9k_common,ath9k,ath9k_hw
mac80211 510355 2 ath9k,iwldvm
cfg80211 459335 5 ath,iwlwifi,ath9k,mac80211,iwldvm
led_class 3611 4 ath9k,toshiba_acpi,sdhci,iwldvm
Is this all?
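On the modules-load.d question: udev should autoload ath9k automatically from the card's PCI ID, so normally nothing is needed. As far as I know, the PCIe ath9k parts don't load a firmware blob from /usr/lib/firmware at all, which would explain only seeing ath6k there. If autoloading ever did fail, the fragment is just the module name, one per line:

```
# /etc/modules-load.d/ath9k.conf -- only needed if udev autoloading fails
ath9k
```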
Thank you everyone!

Well, I would suggest you just try it. If it does not work, you can go in through a chroot environment and fix it.

Similar Messages

  • Migrate switches from different networks to a single network

    Hello, I have a question on migrating C3750 switches between networks.
    Our current situation is that we have two networks.
    We control the cores on both networks but do not control all the edge devices on both networks.
    We can not access the IOS and configuration on a number of switches because we don't have the login information for those.
    In the future if we are to take control of those edge devices how can we modify the IOS and configuration settings for multiple switches at one time?
    Within the core we can modify vlans, destroy and recreate to suit our purposes, so that's easy enough.
    We would rather not have to visit each and every switch and do this manually as there are over 40 switches and only 2 admins.
    Manually doing this requires using the password recovery option method to change the IOS.
    Once done we should be able to update the config via CISCOWorks on all the devices, or we could do it the other way around and update the config and then update the IOS via CISCOWorks.
    Our wish is to attempt to do both via CISCOWorks or some other automated method.
    ej

    Hi,
    Can this be achieved by normal TMS configuration (without Extended Transport Control)? If yes, how?
    Yes.
    Understand the following basic relationship behind transporting changes using TMS:
    ....Repository Objects ->are assigned to-> Package ->are assigned to->Single Transport Layer ->is assigned to-> Single Transport Route -> Target System.
    The basic concept behind standard Transport Control is that every repository object in Source System (e.g Dev-System) is assigned (via a package) to a certain transport layer, it follows that a change request can only have one target system. So, you can transport change requests from multiple clients of Development System, which you can import into QUA system's clients upon release.
    With the standard Transport Control program, it is not possible to create two transport routes with the same transport layer from one SAP System. Thus, the Extended Transport Control program is used for transporting changes based on client-specific transport routes, using the concept of a Transport Group. Using this, one can transport changes to multiple systems and multiple clients using the same transport layer.
    Regards
    Bhavik G. Shroff

  • Analysis Authorization Migration Question

    This is a detailed question:
    1)     I am testing Analysis Authorization Migration in NW2004s SP9 and have applied all OSS notes that are relevant to SP09 and are coming in SP10.
    2)     We have 2 Info object flagged as Authorization relevant 0COMP_CODE and 0COSTCENTER
    3)     We have Object level security set-up in BW 3.x system and for a role we have specified values like 0COMP_CODE has value 1000, 1800. “:”. In the same role we have specified 0COSTCENTER value 130001 to 180001, “:”  and hierarchy node.
    4)     When we migrate to Analysis Authorizations, using RSEC_MIGRATION, this program creates 2 Authorizations ZCOCODE00 & ZCOSTCTRH00. Both of them have 0COMP_CODE and 0COST_CENTER Objects.
    5)     ZCOCODE00 authorization gets value 0COMP_CODE values 1000, 1800. “:” and 0COSTCENTER Value “:”.
    6)     On the same line ZCOSTCTRH00 gets value 130001 to 180001, “:”  and 0COMP_CODE “:”.
    1st Question:
    1)     Why does it create 2 Authorizations?
    2)     During checking it does not pass the authorizations; it seems to me that it fails in the optimization process.
    3)     I manually merged the authorizations into “ONE” object and then the authorization check passes. In other words, if I combine ZCOSTCTRH00 & ZCOCODE00 then the query authorization check passes.
    Is anyone else struggling with this?
    Please note, I am doing Migration so that it updates existing Profiles (Roles now from SP9).
    Any comments will be very helpful.
    Pankaj Gupta

    Hello Pankaj
    There are some basic misunderstandings on your side.
    Let me try to clarify:
    First we should distinguish between migration of authorizations and of what a query does with them.
    You had 2 auth objects before migration (in 3.x).
    Of course, they must be migrated to 2 new analysis auths.
    There is no general possibility to combine authorizations into a single one, as they may appear in different roles and users. Moreover, this would kill performance, and finally, nobody would recognize the origin.
    Only in very restricted cases could one think of combining auths that come out of migration. But then people lose the overview of what goes on.
    Before the corrections in note "Migration IV" the : had not been inserted but now it is for good reasons.
    Now, accept for the moment that you receive 2 auths.
    Then, you cannot (must not) combine the 2 resulting authorizations!
    <b>Authorization 1</b>
    COMP_CODE : 1000, 1300, “:”
    Cost Center : “:”
    <b>Authorizations 2</b>
    Comp_Code “:”
    Cost Center : 3100001-31999999; “:” plus a Hierarchy Node.
    This means that e.g. combination
    COMP_CODE 1000
    COST_CENTER 3100001-31999999
    <u>is not allowed!!!</u> Therefore, they must not be combined!
    Also, the query and its optimization are completely independent of the migration. And here, during query runtime, the auths cannot be combined. It is not a failure!
    Moreover, the merging optimization is just a performance optimization and has nothing to do with whether the query result is authorized or not.
    If you combine them manually you have authorized different combinations.
    Well, now you may wonder why you get 2 auths at all which leads to a "no auth" result in the query execution.
    The reason is that in 3.x, where you got a result with your 2 auth objects, the modeling was wrong.
    If you want to authorize any combination of characteristic values, you should combine these characteristics together in one auth object, not in 2!
    (In BI7.0 it works like that but not in 3.x)
    But you defined 2, which may be valid even in several other InfoProviders independently, and not even at the same time. Moreover, the auth objects may come from different roles and may be assigned to different users, which then have completely different auth content. In general it is not possible to combine different auth objects or to find out those special situations which nevertheless allow for such optimizations. If you re-do a migration with more objects and users, you could even receive different results, which is also not satisfying.
    Therefore, instead, the mechanism was introduced to insert a : auth to those characteristics that are auth relevant (and checked now with 7.0) but not in the currently processed auth object.
    In your special case it may have made sense to combine them, but not in general. And a migration can only try to work as generally as possible.
    For your application you may combine the 2 auths manually if you want to allow also the crossover combinations
    COMP_CODE 1000
    COST_CENTER 3100001-31999999
    Best regards
    Peter John
    BI Development

  • Migrate Roles fails when migrating VMs with legacy network adapters (2008R2 - 2012)

    I'm working on an upgrade of a Hyper-V 2008 R2 cluster to a Hyper-V 2012 cluster. I am using the "migrate roles" feature of failover clustering to migrate the CSVs and VMs. The wizard asks which switch the VMs need to be connected to on the target cluster.
    All VMs with network adapters can be started in the new cluster without any issues. If you look at the XML file of the migrated VMs with normal network adapters, a new XML has been generated in the proper 2012 format. However, all VM's with a legacy
    network adapter fail to start. Also there is no migrated XML file in the VM directory. It is impossible to check or adjust the settings of the migrated VMs with legacy network adapters using the failover clustering console.
    I have reproduced the issue in my lab several times, and it seems like a bug.
    There are several workarounds, but I am looking for a real solution.

    Hi,
    We recommend that you use the legacy network adapter only to perform a network-based installation or when the guest operating system does not support the network adapter.
    If the virtual machine continues to use the legacy network adapter it will not be able to leverage many of the features available in the Hyper-V virtual switch. You may want
    to replace the legacy network adapter after the operating system is installed.
    The related KB:
    Building Your Cloud Infrastructure: Converged Data Center without Dedicated Storage Nodes
    http://technet.microsoft.com/en-us/library/hh831829.aspx
    Configure Networking
    http://technet.microsoft.com/en-us/library/cc770380.aspx
    Hope this helps.

  • Question on how networking works with new Macbook Pro

    Hi,
    I have a question about how networking works with my new Macbook Pro. This is my first laptop so am new to mobile computing. I am about to configure it to replace my desktop which uses a wired DSL connection. Once I have my Macbook Pro configured to run off the DSL modem, what happens when I take this computer to a location that has ethernet? Will just plugging in the ethernet cable be enough to get on line? Or is there further configuration work that needs to be done?
    I used a Windows laptop from work and all I had to do was plug it in to a hotel ethernet cable to get on line. However, this laptop was never configured to run on my DSL network so I am not certain if that makes things more complicated.
    Thanks for your help!

    Hi
    Your MacBook Pro should just work when plugged into an ethernet connection.
    Just check something, Open Network Preferences, you can select it from the Airport icon in the menu bar.
    In the left-hand pane click on Ethernet and just check that it is set to configure using DHCP in the right-hand pane, that's it.
    So when you plug in the ethernet cable it will ask the router for an IP address and off you go.
    I assume that any system you plug into will use DHCP rather than static IP addresses. If it is a static IP address, you will need to manually configure the ethernet settings by inputting the IP address and subnet mask info plus DNS servers.
    Usually it is done with DHCP, so you don't have to do a thing - and likewise your home DSL is presumably DHCP?
    If not, then you will have to change your settings: make a 'Hotel' location which you can use when in hotels and keep your 'Home' set-up for when you are at home. In the pull-down menu select Edit Locations to add new ones.
    Phil
    Message was edited by: Philip Tyler

  • Exchange 2013 Migration Questions

    We are migrating Exchange 2013 SP1 to a new server. I just realized that I have been asking migration questions on the "general" forum. Here's the deal:
    We just want to move to a newer server and reuse the "old" server as a DAG member. We have a production exchange 2013 SP1 server and I found that the DB is not on a RAID'ed drive. So, we purchased a server. I installed 2012 R2 and all windows updates.
    Then installed exchange 2013 SP1. Then got our AS/AV software installed. When I boot the new server up and login to ECP, it sees all the exchange users, DB and info. So, I followed:
    http://technet.microsoft.com/en-us/library/dd876926(v=exchg.150).aspx
    to migrate users and current email to the new DB. However, when users logged in their current email was not there. I also tried to use the ECP migration option and this resulted in the same, users old email was not in their inbox. So I had to perform that
    same process on the old DB to get users old email back. I then tried to use the migration option in the ECP->same issue. So as of now, we are on the old DB and things are working fine.
    According to:
    http://technet.microsoft.com/en-us/library/aa997006(v=exchg.65).aspx
    When moving from server to server, the DB filename must be the same. The non-production server does not have the same DB filename. Could this be the problem? Could someone point me to documentation on migrating from server to server?

    So I ran Get-Mailbox -Database "Source Database Name" -ResultSize Unlimited | New-MoveRequest -TargetDatabase "Target Database Name" and moved all the user mailboxes. Then moved the system mailboxes, public folders and
    OAB. Adjusted send connector to reflect new server. Log in as a user, all email is there. Great! Dismount the old DB. Wait a few minutes and user email is still there. Woohoo! Shut down the old exchange server and a few minutes later, all email is gone. Boo!
    Boot the old server back up and mount the old DB, all email is back in user inbox. How do I use only the new DB in exchange 2013?
    So I just dismounted the old DB and all email stays in users' inboxes?! So it must be something with the old server that did not get moved over, but for the life of me I can't figure out what.

  • Iprint migration question

    We had to back out of a migration last week because of issues with edir version on Netware. I've updated that and TSA, but another question just came up. I'm migrating iPrint from a cluster, plus doing transfer ID. If I choose cluster resource on the source server in miggui, I only get the consolidate option. Does that mean I need to first run a consolidation job just for iprint and then go back and run the transfer ID? Or will the consolidation screw up a transfer ID?

    Service migrations based on cluster resources and Transfer ID are separated in the Migration Tools (miggui). You need to execute the cluster-based iPrint migration in one project and the Transfer ID migration in another project.
    Perform all service consolidation migrations and make sure they are all completed/synced and working before attempting Transfer ID.

  • Hyper-V Server 2012 Migration Questions

    Hello All,
    This is my first post here, but I have used these forums many times for information. Sorry in advance for the long post.
    I have a few questions regarding migration to Hyper-V server 2012 for my production environment. I have done quite a bit of reading, but I have a few direct questions and I would like to get some direct answers.
    My current production environment consists of one PowerEdge 2900 with 2 Xeon X5460 Quad Core 3.16GHz CPUs, 24 GB of RAM and a RAID 10 consisting of 8, 500 GB HDDs for a total of 2TB of storage. I am currently running Server 2008 R2 Enterprise w/ GUI as the
    Hyper-V host OS. I have 4 virtual machines all also running Server 2008 R2 Enterprise. The 4 virtual machines consist of 1 domain controller, 1 Exchange Server with Exchange 2010 Standard, 1 Server running SharePoint 2010 Enterprise and the remaining server
    running IIS with FTP and HTTP.
    The network topology is as follows….
    Hopefully it is clear from my diagram that the Hyper-V host OS is connected to the same physical network as the domain, but is not a joined to the domain. I set it up this way because I had concerns about connectivity and manageability because the domain
    controller is a guest VM. Also, the IIS server is on a completely different physical network independent of the domain.
    What I would like to accomplish is the migration of the above environment to Hyper-V Server 2012 as is. I want to keep my existing guest VMs unchanged and running Server 2008 R2 for now as well as keep the existing network topology intact.
    I have 3 additional servers in a separate test environment that would be able to serve as temporary storage or whatever is needed for the migration process.
    Here are the two main things I would like to accomplish with this migration…
    1. Make the transition from Server 2008 R2 to Hyper-V Server 2012 as a host OS.
    2. Migrate virtual hard disks from .VHD to the new .VHDX format.
    All that being said, I have finally come to my questions regarding this process.
    First and foremost, I would obviously need to back up my current setup in case something goes horribly wrong during the migration. My question regarding the initial backup is would it be better to do a bare metal backup of the Hyper-V host or should I do
    individual backups (bare-metal?) of the Guest VMs from within their operating systems?
    Second, since I plan to use Hyper-V Server 2012, I will have to manage the host OS using the RSAT from a domain joined client running Windows 7 Professional. How much of a pain is it going to be to setup RSAT and manage the non-domain joined host from a
    domain joined client? Is there a better way without using SCVMM or using Server 2012 w/ a GUI as the host OS?
    Third, are there any concerns I should have, precautions I should take or procedures I need to do before, during or after the migration regarding the existing VMs and the new virtualized hardware environment on the same physical host?
    Forth, should I use the trial version of SCVMM 2012 SP1 (or another previous version) to perform the migration? What should I be aware of using SCVMM for the migration and then discontinuing its use after the migration is complete and moving to management
    using the RSAT?
    Fifth, if I don’t use SCVMM for the migration, what is the best procedure for moving the VMs? Should I just copy the VHDs to a temporary storage location, install Hyper-V server 2012, copy the VHDs back, create new VMs and attach the VHDs or should I use
    the export/import process?
    Number six, when is the best time to migrate the VHDs to VHDX format and what would be the best method?
    And finally, do I need to worry about USN rollback with a single domain controller? From my reading, this seems to be a point of disagreement. Some people say it could happen while others say it won’t. Is there any point during the migration process where
    it could occur either during the copying of VHDs or from the switch to VHDX?
    Again, sorry for the long post and thanks for staying with me this far. Any information would be much appreciated

    1) As Jens said below, with Windows Server 2012 you can simply copy the configuration files and VHDs from a 2008 R2 server to a 2012 server and import them - the one caveat to this is that any VLAN configuration is lost and you have to simply re-create it.
    Optionally you can also export the virtual machines from Windows Server 2008 R2 and then import them on Windows Server 2012.
    2) Remote management in a workgroup does have some caveats associated with it - take a look at
    http://blogs.technet.com/b/jhoward/archive/2009/08/07/hvremote-refresh.aspx. Generally I would recommend joining the Hyper-V management operating system to the domain - not just because of these issues but for a number of other features to work properly
    (see below)
    3) I always recommend validating the hardware and environment after the installation before migrating critical workloads to it - testing networking, backup etc... to make sure they function as expected.  Also ensure that you upgrade the integration
    components in the VM's after the migration.
    4) That is an option - though you might find you like SCVMM
    5) Recreating VMs using existing VHDs has some issues - for example, the BIOS GUID changes, and all of the NICs are re-plug-and-played. When possible, copying the configuration or using export is much better.
    6) The sooner the better - VHDX has a number of significant advantages. You can do the migration using the Hyper-V Manager UI (edit disk) or via PowerShell with Convert-VHD. Do keep in mind that during the conversion you need 2x the space (for the original and the new VHDX).
    7) In the past you could get into trouble if you, for example, snapshotted an AD virtual machine and then reverted it - taking one offline and then bringing it back online was never a problem. In Windows Server 2012 we addressed this with a feature called generation IDs.
    Domain considerations...  A few things to keep in mind regarding the choice to not domain join the Hyper-V server.
    - You can't live migrate virtual machines
    - You can't utilize Hyper-V over SMB
    - Management is more difficult and less secure
    -Taylor Brown -Program Manager, Hyper-V -http://blogs.msdn.com/taylorb

  • Migrating Local Users to Network/Mobile Home Directories

    Hey Everyone!
    A Happy Holiday's to you all! I'm in the midst of building a new system for my new clients. They had nothing but static IP numbers and no actual servers in a 50+ Mac environment. MacBook Pros, G5's and PowerBook G4s up the yang.
    What I'm looking to do is migrate as seamlessly as possible, all of the existing local users to network users and then some of those network users will become mobile accounts. I have Open Directory authenticating properly so...
    Here's my plan:
    1) Finish creating new builds for the MacBook Pro's, the G5s, and the PowerBook G4s.
    2) Create the users in OD and assign them to groups for permissions.
    3) Drag and drop entire home directory from each computer to a shared folder on my OD Server.
    From here I want to run chown, I'm guessing, to change the user:group for the home folders I copied over so that they match the ID's created by OD. I figure when I do that, then I can simply replace the OD created home folders in my server's Users folder with the copied and permission modified home directories from each local user.
    My guess is that would be the fastest way to migrate the users to the network.
    My question is: are these the terminal commands I need to run on each folder in order to make this as seamless as possible?
    chown -R username:newgroupname /~path to copied local home directory
    Is that syntax right?

    The command is correct!!!
    But my guess is that if you use ACLs to set the permissions you won't need to run the command on every folder.
    Best Regards
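    A minimal sketch of that ownership fix, wrapped in a function for clarity (the user, group, and path are placeholders; the real values come from the OD-created accounts):

```shell
# fix_home_owner USER GROUP DIR -- hand a copied home directory over to
# the OD-created account. All three arguments are placeholders here.
fix_home_owner() {
    # -R recurses into every file and subfolder under DIR
    chown -R "$1:$2" "$3"
}

# Example (hypothetical names; note there is no "~" in the path):
#   fix_home_owner jdoe staff /Users/jdoe
```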

  • Getting ready to migrate, question

    I work in a small parochial school and am not adept at server configurations. We have an Intel Xserve running Snow Leopard Server with wikis only for about 3 years and recently purchased a Mac Mini with Snow Leopard server.
    I have been tasked (during the school year in a live environment of course) to migrate all of the content from the 10.5 server to the 10.6. Once I fire up the new server and assign it the same public IP address as my 10.5 server, I'll disconnect the 10.5 from the network and migrate.
    My question is: if this fails in any way, can I simply plug the 10.5 server back in and everything will be back to normal?

    Hopefully not too late for an answer, but yes, if the new server didn't (for instance) accept any email, or collect any web data, then from the point of view of the old server, it was just powered off for a few hours, and it'll pick up where it left off.
    Server Migration should do this for you in a couple of hours...

  • How do I find a photo file in the Finder + future migration question

    Being a recent Mac convert, I am just getting used to the 'complete control' approach to photo management that iPhoto has. I'm used to knowing where my photos physically reside on my computer - this is useful for doing things like uploading photos to a website for example. How do I actually find a particular photo in the Finder (or even give photo files recognisable names, as opposed to whatever automated naming system iPhoto uses)?
    A followup question to this is, what happens if I ever want to migrate my photos out of iPhoto one day to another application or even back to a PC (God forbid - just my paranoia about being forced to use a particular system forever kicking in here!) - is there a way to do this, and keep photo modifications etc? Or once I start using iPhoto, have I made my decision for life??
    My apologies if this has been asked before, I couldn't find anything in the forum when searching.

    poddster
    I hope you've a notebook and pen, but you've asked a lot of questions:
    I'm used to knowing where my photos physically reside on my computer
    Your photos are stored in the iPhoto Library in your Pictures folder. This is a Unix-style package folder that is very easy to see inside: right-click on it and choose Show Package Contents. A Finder window opens with the library exposed.
    Here's how the library is laid out:
    In this folder there are various files, which are the Library itself and some ancillary files. Then you have three folders:
    (i) Originals contains the photos as they were downloaded from your camera or scanner.
    (ii) Modified contains edited pics, shots that you have cropped, rotated or changed in any way.
    iPhoto always preserves the original file, all operations are carried out on a copy.
    (iii) Data holds the thumbnails that the app needs to show you the photos in the iPhoto Window.
    And here's a warning: It is strongly advised that you do not move, change or in anyway alter things in the iPhoto Library Folder as this can cause the application to fail and even lead to data loss.
    this is useful for doing things like uploading photos to a website for example
    No it's not. Don't surf the iPhoto Library. The idea with iPhoto is that you do everything via the iPhoto Window or media browsers:
    So, to access pics use one (or more) of the following:
    There are three ways (at least) to get files from the iPhoto Window.
    1. *Drag and Drop*: Drag a photo from the iPhoto Window to the desktop, there iPhoto will make a full-sized copy of the pic.
    2. *File -> Export*: Select the files in the iPhoto Window and go File -> Export. The dialogue will give you various options, including altering the format, naming the files and changing the size. Again, producing a copy.
    3. *Show File*: Right- (or Control-) Click on a pic and in the resulting dialogue choose 'Show File'. A Finder window will pop open with the file already selected.
    To upload to MySpace or any site that does not have an iPhoto Export Plug-in the recommended way is to Select the Pic in the iPhoto Window and go File -> Export and export the pic to the desktop, then upload from there. After the upload you can trash the pic on the desktop. It's only a copy and your original is safe in iPhoto.
    This is also true for emailing with Web-based services. If you're using Gmail you can use THIS
    If you use Apple's Mail, Entourage, AOL or Eudora you can email from within iPhoto.
    If you use a Cocoa-based Browser such as Safari, you can drag the pics from the iPhoto Window to the Attach window in the browser. Or, if you want to access the files with iPhoto not running, then create a Media Browser using Automator (takes about 10 seconds) or use THIS
    iPhoto doesn't name the files; those names are given by your camera. You can add titles in iPhoto, and if you use the File -> Export command there's a facility there to name the resulting file (it'll be a copy - remember what I said about all operations being done on a copy...) with the title.
    Do not rename files in iPhoto, That comes under the heading of making changes in the iPhoto Library Folder and alters the path to the files. If you do, iPhoto will lose track of the file. But the truth is there's simply no need to. However, you can rename the files before importing them if you like.
    Migrating is really easy. You've seen the layout of the Library above. The originals are all there, in the originals folder, the Modified versions in their folder. If you want to have only the most recent versions of pics, then export them from iPhoto to a folder on the desktop.
    By all means post back if you need more.
    Regards
    TD

  • Migration questions from Exchange 2007 to Exchange 2013

    Dear Forum Members,
    I'd ask just two short questions, regarding a migration from Small Business Server 2008 (Exchange 2007) to 2013. We installed the two Exchange 2013 servers, configured a DAG and updated every single URL (OWA, ECP, AnyWhere, Autodiscover etc.) to be a mail.domain.com
    record (DNS round robin, since no HW load balancer :( )
    Thankfully, the mail flow between the Internet and the remaining Exchange 2007 users is still working. Now for those users I've already migrated, if I check Outlook connections there are several connections to GUID-based servers via the DNS round-robin
    name proxy (the Anywhere address). But I saw that there is still just one connection (type: Exchange Public Folders) to the old 2007 server. Is that okay? I'm a bit afraid to uninstall it because of this.
    And the other thing: Based on what I wrote, do you think I've done it good? Or could I miss any important things? You are much experienced than me in these migrations so I hope that I can get some confirmation/advice here :(
    ps: Is it good if I set NTLM authentication for Outlook AnyWhere?
    Thank you very much for your help,
    Best Regards,
    Chris

    Hi Chris,
    Agree with Hinte, the users will still connect to the Exchange 2007 server if there is a public folder database on the old server.
    If the old public folders are no longer in use, you can delete the public folder database and create a new one on the Exchange 2013 server; you can also consider migrating the public folders to Exchange 2013.
    The following articles for your reference:
    Use serial migration to migrate public folders to Exchange 2013 from previous versions
    Set up public folders in a new organization
    Step-by-Step Exchange 2007 to 2013 Migration
    >>Is it good if I set NTLM authentication for Outlook AnyWhere?
    The Outlook Anywhere authentication method you choose will depend on a few factors in your environment;
    I recommend you refer to the following thread to understand how to choose:
    https://social.technet.microsoft.com/Forums/exchange/en-US/75f8d6c4-70f4-49e5-ac32-a49dd91b5520/outlook-anywhere-ntlm-for-internal-users-and-basic-for-external-users?forum=exchangesvrclients
    Exchange 2013: Configuring Outlook anywhere
    Best regards,
    Niko Cheng
    TechNet Community Support

  • Open Directory Migration Question

    Setup:
    My company has two servers, both running 10.5.6. We are migrating from the server Fubar (xserve) as it has had a lot of problems and we want to do a fresh install on it (I was not the admin who initially set it up).
    In order to get a 'fresh' OD going, we are recreating all the accounts on the new server Edoras (PowerPC Mac Pro), making sure to preserve the UIDs of the users.
    Problem:
    User A cannot change his password on Edoras after Directory Utility has been changed to point at it. He can change his password locally, but it does not propagate to Edoras, nor does a password change on Edoras affect his local machine.
    The questions I haven't been able to get answers for are:
    * Should the OD search string be different on Fubar and Edoras? Currently our search string is 'dc=fubar,dc=domain,dc=com'.
    * Are there other attributes that have to be setup in OD besides UID? I noticed when using the Target tab in Workgroup Manager that there is a GeneratedUID attribute, does this need to match?
    Thanks for any information/help.

    I did something like this recently. Unfortunately I couldn't get an answer on the Internet and had to re-configure Directory Access on the client machines manually.
    I moved our system from a PowerMac G4 with several upgrades (eSATA card, eSATA Coolgear enclosure, 7200.11 Seagate drives (yeah I know, bad drives to use), 1.8 GHz PPC 7447 upgrade, 1.5GB of RAM) to a new Mac Pro with a Highpoint RAID controller. The old G4 was very unreliable and couldn't handle it.
    I had to go to each machine with ARD, open Directory Access, delete the LDAP entry and re-enter it. This was really annoying and confusing for me as the old server and the new server had:
    The same version of OSX (ok, one was a PPC version and I special ordered the Intel version from Apple Tech Support), but they both were running 10.4.11 with the newest security patches.
    The same OD Search Strings
    The same IP Address for the Server
    The same DNS name for the server
    and the same user IDs and group settings
    and I still had to re-do Directory Access on the client machines. Before re-doing the Directory Access re-binding I would try to log in. The "other" icon would appear on the login window, but when I would log in with the correct username and password the login window would "shake its head" and wouldn't let me log in.
    The biggest pain was that portable directories didn't sync correctly anymore, so I had to manually back up, then delete the account, then re-bind, then re-create and restore the portable directory on each laptop manually.
    Unfortunately I do not know the Unix command to change the directory binding on client computers using ARD. If such a command exists it would make things much easier for you. Does anyone know if a command exists?

  • Exchange 2013 Migration in a complicated network environment

    Hello everyone,
    I am conducting an Exchange migration from 2007 to 2013. The client has 4 geographical sites, and each of those sites has an Exchange CAS server on it. The sites
    also have DCs on each of them.
    There are two types of connections between sites: one is MPLS at 50 Mbps and the other is P2P at 2 Mbps. The client wants to have only 2 Exchange 2013 servers with
    all the roles collocated on them and a DAG between them.
    I already have installed and configured DAG on the servers. One server is located in the Central data center and the other is located on a second geographical
    site that also have a disaster recovery data center.
    The client wants to configure the MAPI network on Exchange 2013 to use the P2P 2 Mbps connection and use the MPLS 50 Mbps connection for the replication NIC.
    The problem is that both networks, MPLS and P2P, have the same IP subnet and same gateway, e.g. (10.1.1.0/16), but a range of those IPs is configured on the router to
    use the MPLS connectivity.
    I have tried to add a static route for the replication network to use the gateway, but when trying to add a copy of a centralized database to the DAG, the DAG copies
    over the P2P line.
    Is it possible to configure this with the same subnet or do I need to have the mpls connection on a totally different network and subnet? 
    I would appreciate all your suggestions, and I am very sorry for my terrible explanation because I am myself confused about their network topology.
    I will prepare a visio diagram of the network, IPs and everything to clear everything out.
    Mohammed JH

    I have solved the issue; it was very simple, but due to my lack of networking knowledge it got complicated. The way I configured the replication networks between the two sites was to give each site's replication NIC a different subnet,
    configured on the high-speed, high-bandwidth MPLS network; the subnets must be different on each site.
    For each replication NIC, a persistent static route should be configured on the Exchange server to tell the NIC where to direct traffic exactly.
    After configuring the subnets on the replication NICs, I added the static routes and the replication started to work flawlessly.
    This is the static route command that I ran on the first machine on the first site; it tells the replication NIC to direct traffic to the subnet on the second site through the gateway 10.1.1.1:
    route -p add 10.5.1.0 MASK 255.255.0.0 10.1.1.1 
    On Exchange on the second site, I had to run the same command as well
    route -p add 10.1.1.0 MASK 255.255.0.0 10.5.1.1 
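    The reasoning behind the fix can be sketched with standard longest-prefix routing logic (the addresses below are illustrative, loosely based on the ones in the post, not the client's real topology): while both NICs sat in the same /16, any destination in that range matched either interface and the OS just followed one default gateway; once each site's replication NIC has its own subnet, a static route for the remote replication subnet matches unambiguously.

    ```python
    # Sketch of why overlapping subnets broke per-NIC routing.
    # Hypothetical addresses; not Exchange-specific, just route matching.
    import ipaddress

    # Originally both NICs lived in the same 10.1.0.0/16, so a replication
    # peer's address matched the MAPI network too -> traffic took the P2P route.
    mapi_net = ipaddress.ip_network("10.1.0.0/16")
    old_repl_peer = ipaddress.ip_address("10.1.5.20")
    print(old_repl_peer in mapi_net)          # True -> ambiguous, default route wins

    # After renumbering, site 2's replication NIC sits in its own subnet,
    # so the persistent static route for 10.5.0.0/16 is the only match.
    repl_net_site2 = ipaddress.ip_network("10.5.0.0/16")
    new_repl_peer = ipaddress.ip_address("10.5.1.10")
    print(new_repl_peer in mapi_net)          # False -> MAPI route no longer matches
    print(new_repl_peer in repl_net_site2)    # True  -> static route via MPLS gateway
    ```
    
    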
    thanks everyone for the help.
    Mohammed JH

  • Forms system potential migration questions.

    We have a forms and reports based system built in 10g.
    We are considering migrating to 11g forms and reports but long term moving to ADF or APEX.
    We are generally a data entry system run on an intranet, so my preference has always been APEX, but management are trying to follow the Oracle roadmap, which is pushing ADF.
    We are a small outfit so the solution we choose will probably be the one which offers the quickest development times.
    I have signed up for apex.oracle.com to have a play but seems to be taking forever to get approval!
    A few questions re APEX (apologies if these are really simple)
    1) How is user access controlled? Currently all our users log in via one SYSDBA user. Our current system has a logon box which then allows access to the screens they can amend. I assume APEX uses the DB user access rights?
    2) How easy is it to deploy fixes etc.? We have upwards of 200 customer sites. We currently distribute FMX and RDF files on a monthly basis.
    3) Can you use keystrokes? Many customers like the F keys to be used as quick keys, e.g. in Forms F10 can be save, F3 can be clear, etc.
    4) Can we use our existing database? Although this is many years old and all the correct constraints may not be in place (i.e. a few FKs/indexes may be missing, but we could try and tidy up pre-migration), would it be possible to use it? We are considering rewriting the database, but most customers will want to bring through their old data and have custom reports built on it, so we would struggle to rewrite.
    5) How is inheritance handled? In our forms environment all our items are inherited from one point so if we change that one point it follows through the system
    6) I have seen online demos whereby people are creating mini databases from spreadsheets. Can it be set up so the users can create things like this without affecting other areas of the system?
    7) Any pros/cons when comparing to ADF?
    8) Does anyone have any feedback from migrating from Forms?
    Any feedback at all is gratefully received.
    I'm looking forward to the online demo.

    Sure Oracle pushes ADF, because it has cost them $$$$$$ and they want to get that back. Apex is part of the database, and as such you already have a license for it. No need for a Weblogic or OAS license.
    >>seems to be taking forever to get approval!
    Went pretty fast for me a few years ago. 1-2 days if I remember correctly.
    1) There are pre-built authentication methods, but you can also create your own method (which most people do).
    >>I assume APEX uses the DB user access rights?
    As in most web applications, there is only one user that actually accesses the database. And I would never ever make that a SYSDBA user!
    Other users may log in via username/password that can be stored in Apex tables or your own tables. They can also be actual database users. However, since they do not connect to the database, their roles are irrelevant. You may use these roles to check authorization in your application, though.
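    The pattern described above (one technical account connects to the database; end users live in your own table with salted password hashes) can be sketched generically. This is not APEX's actual authentication API -- in APEX you would implement this as a PL/SQL authentication function -- it just illustrates the idea:

    ```python
    # Generic web-app authentication sketch: end users are rows in a table,
    # verified against a salted hash. NOT APEX-specific code.
    import hashlib
    import hmac
    import os

    def hash_password(password, salt=None):
        """Return (salt, digest) suitable for storing in a USERS table."""
        salt = salt or os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        return salt, digest

    def check_login(password, salt, stored):
        """Re-hash the supplied password and compare in constant time."""
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, stored)

    # The application itself would connect with one low-privilege DB account;
    # these values would normally be columns on a users table.
    salt, stored = hash_password("s3cret")
    print(check_login("s3cret", salt, stored))   # True
    print(check_login("wrong", salt, stored))    # False
    ```

    Because the end users never hold a real database session, their database roles are irrelevant; authorization is checked at the application level, as the answer notes.
    
    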
    2) Apex exports applications in a sql file, so you can deploy applications via sqlplus (or via the Apex administrator page).
    3) The keys are actually browser keys, so you are very limited. You can use some Javascript, but forget about something like F7-F8 combination. It doesn't matter what technology you use, web is different from client/server (or the Forms Java applet). You have to rethink GUI concepts coming from Forms and going to browser applications.
    4) Yes. But like Forms, good database design makes things easier (like using wizards).
    5) There are all kinds of places where you can set defaults, templates etc.
    6) Yes.
    7) Google for discussions on ADF vs Apex.
    8) We use it side by side. No plans to recreate Forms as an Apex application.
    Depending on the size of your Forms app, it may take a long time to re-create it in Apex. There are some Forms migration tools in Apex, but I wouldn't use those. They cannot migrate the Forms code anyway, since a browser doesn't understand pl/sql code. As said, you really have to rethink your GUI when moving to a web app.
