Questions about Trex migration

Hello experts,
I have some questions about Trex migration.
We have to migrate our Trex instances to another Hardware type (and operating system type).
Is it somehow possible to export the Trex database (indexes etc.) on the source Trex, and then import them on the new target Trex? I ask because I found another thread here saying that this is only possible if the source and target OS are the same, and the source and target Trex are the same version. I don't know if this is true, though.
Or is it easier to just install a new Trex, and then let it rebuild index etc.? And is this a feasible way to do it?
As you can see i don't know much about Trex, so any hints and good advice is appreciated.
Thanks in advance.
Regards,
Kenneth

Hi Kenneth,
there are different approaches for migrating TREX:
a) Install TREX on a new machine, export the indexes on the source, and afterwards import them on the target. This is possible even if you do not have the same TREX version. (If you only wanted to upgrade your revision, you would simply run sapinst or install.sh on the same machine.)
b) Install TREX on a new machine and re-index everything.
But keep in mind that TREX is not BIA, even though it is perhaps the same SAP software component.
So re-indexing could take days, not just 10 minutes. This depends on the type of objects to be indexed and on the volume.
Also, it is not recommended in general to switch on delta indexes. For BIA, of course, but not in general for TREX.
Best regards
Frank

Similar Messages

  • Quick question about "after migration"

    I just upgraded this iMac to 10.7.4 from 10.6. Now I need to migrate some things from another iMac running 10.4.11 like all the Mail.
    I've read the migration instructions and think I can do it.
    My question is what happens to the old Mac after the migration - is it wiped clean or is everything still there?
    I am iffy about the possibility of making a mistake in the migration and then losing everything on the old computer.
    Thanks in advance,
    AlleyCat

    Nothing happens to the old Mac. Migration is just a specialized form of file copying or restoring from a backup.

  • Few questions about OWB migration 10g---- 11g and UIODs

    I am currently migrating an OWB repository from 10g to 11g.
    Both repositories are on the same database.
    We have just one single Project in our Repository. It is actually the (preinstalled) MY_PROJECT renamed into something else, so it has the UOID of the "default" MY_PROJECT but, of course, different physical and business names.
    Because we renamed MY_PROJECT, complications occurred when we first tried to do the repository upgrade the recommended way, with the Repository Assistant. During the upgrade process, an error came up saying that objects with the same UOIDs but different names exist.
    Obviously, MY_PROJECT from the 10g repository collided with the new, preinstalled MY_PROJECT in the (almost) empty 11g repository/workspace.
    Also, MY_PROJECT in the 11g workspace has exactly the same UOID as the one created in the 10g repository.
    I was told by Oracle support that this was a bug, but they do not see it as high priority, so we had to do a workaround: the migration of the repository on 11g.
    Now my first question: Was it completely insane to use MY_PROJECT for your actual ongoing Project? We never had any other problem with this constellation. I also noticed in forums that people indeed use MY_PROJECT for their work.
    The second question: Has anybody ever seen the same problem when trying to upgrade to 11g?
    The migration procedure is as follows:
    -install 11g Workspace with Repository Assistant
    -Export locations and data from 10g repository
    -Import locations and data into the 11g repository with the update option, matching on UOID, so we do not get a problem with MY_PROJECT
    -register locations in the 11g repository
    -deploy all mappings and workflows
    Now, this all works fine, and our new 11g repository runs without problems.
    I am still puzzled by a few things:
    The new 11g repository is almost empty apart from MY_PROJECT and DEFAULT_CONFIGURATION. Now, MY_PROJECT in 11g has the same UOID as in Oracle 10g. But DEFAULT_CONFIGURATION in 11g has a different UOID from DEFAULT_CONFIGURATION in 10g. It is always the same UOID for every new 11g installation (I've upgraded the repository on many databases).
    Now my 3rd question: Is there any particular reason why DEFAULT_CONFIGURATION has a different UOID in 11g while MY_PROJECT has the same UOID? Is there any logic behind it that I fail to grasp?
    Another thing: I said that I am importing the complete Project into the new repository with the update option, matching on UOID. I thought I should get a problem with DEFAULT_CONFIGURATION, since it is in the full export of the Project and has a different UOID than in the 11g repository.
    But I did not get the problem at all. DEFAULT_CONFIGURATION was simply skipped during the import, as visible from the import log.
    Therefore my 4th question: Why didn't OWB try to import DEFAULT_CONFIGURATION? Is it an "internal" object that, as such, can't be changed?
    The reason I am so obsessed with UOIDs is that we have an automated release procedure (between development, test, and production repositories) which is based on comparing UOIDs.
    Hence a slight trace of concern on my side, because DEFAULT_CONFIGURATION now has a different UOID than before. On the other hand, we only propagate mappings and workflows between repositories, so I do not see why the default configuration should matter here.
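    The UOID-based comparison behind such a release procedure can be sketched in a few lines. This is a hypothetical illustration, not OWB's actual mechanism: it assumes each repository export has been flattened into a CSV of "uoid,name" pairs (the file format and function names are mine, not Oracle's).

```python
# Hypothetical sketch of a UOID-based release comparison (not OWB's own code).
# Each repository export is assumed flattened into a CSV of "uoid,name" pairs.
import csv

def load_uoids(path):
    """Return {uoid: name} from a two-column CSV export."""
    with open(path, newline="") as f:
        return {row[0]: row[1] for row in csv.reader(f) if row}

def diff_uoids(dev, prod):
    """Classify objects by UOID: added in dev, removed, or renamed."""
    added = sorted(set(dev) - set(prod))
    removed = sorted(set(prod) - set(dev))
    renamed = sorted(u for u in set(dev) & set(prod) if dev[u] != prod[u])
    return {"added": added, "removed": removed, "renamed": renamed}
```

    Matching on UOID rather than on name is exactly what makes renames (like the renamed MY_PROJECT) harmless here; only an object whose UOID changes between repositories, like DEFAULT_CONFIGURATION above, would show up as removed/added.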
    Thanks very much in advance for any answers/suggestions/ideas and comments!
    Edited by: Reggy on 27.01.2012 07:12


  • Question about Groupwise Migration

    Hello everyone,
    Can you please help me with my question. My company is planning to upgrade
    our existing Groupwise 6.5.4 to Groupwise 7. Is there a detailed
    documentation on how to do it? Any help would be very much appreciated.
    Thank you.
    OS Platform: Netware 6.5 SP6

    Second that!
    On 12/8/2007 at 5:57 AM, Dave Parkes wrote:
    > http://www.caledonia.net/gw7upg.html is probably as good as you will get.
    >
    > Cheers Dave

  • Question about data migration from one company to new without transactions

    Dear All,
    My company is using SAP B1 2005B. I would like to create a new company which is based on an existing one, including all configuration, chart of accounts, BP records and so on. May I know how to do it?
    From Samson

    I tried it and I got a runtime error from the "Microsoft Visual C++ Runtime Library".
    When I tried only some of the options, there was no problem at all. However, when I select all options, the memory usage increases and then the error is returned.
    The server is Win2003 with 4GB memory.
    From Samson
    Edited by: samson leung on Sep 21, 2009 6:40 PM

  • Question: about to purchase ipad air - how do I migrate contacts,bookmarks,and address books from either an iMac or old macbook?

    Question: about to purchase ipad air - how do I migrate contacts,bookmarks,and address books from either an iMac or old macbook?

    Set up iCloud on both devices
    How to set up iCloud on all your devices
    http://www.apple.com/icloud/setup/

  • Question about Kurt's comments discussing the separation of AIA & CDP - Test Lab Guide: Deploying an AD CS Two-Tier PKI Hierarchy - Kurt L Hudson MSFT

    Question about the sentence in bold. What is the meaning behind this comment?
    How would you separate the role of the AIA and CDP from a CA subordinate server? I can see where I add a CES and CEP server which has those as well, but I don't completely understand his comment. Because in this second step, (http://technet.microsoft.com/en-us/library/tlg-key-based-renewal.aspx)
    he shows how to implement CES and CEP.
    This is from the guide located at: http://technet.microsoft.com/library/hh831348.aspx
    Step 3: Configure APP1 to distribute certificates and CRLs
    In the extensions of the root CA, it was stated that the CRL from the root CA would be available via http://www.contoso.com/pki. Currently, there is not a PKI virtual directory on APP1, so one must be created.
    In a production environment, you would typically separate the issuing CA role from the role of hosting the AIA and CDP.
    However, this lab combines both in order to reduce the number of resources needed to complete the lab.
    Thanks,
    James

    My concern is, they have a 2-3k base of XP systems, and over this year they are migrating them to Windows 7. During this time they will also be upgrading hardware for the existing Windows 7 machines. The turnover of certificates is going to be high, which, from what I've read here, worries me.
    http://blogs.technet.com/b/askds/archive/2009/06/24/implementing-an-ocsp-responder-part-i-introducing-ocsp.aspx
    The application then can go to those locations to download the CRL. There are, however, some potential issues with this scenario. CRLs over time can get rather large depending on the number of certificates issued and revoked. If CRLs grow to a large size, and many clients have to download CRLs, this can have a negative impact on network performance. More importantly, by default Windows clients will time out after 15 seconds while trying to download a CRL. Additionally, CRLs have information about every currently valid certificate that has been revoked, which is an excessive amount of data given the fact that an application may only need the revocation status for a few certificates. So, aside from downloading the CRL, the application or the OS has to parse the CRL and find a match for the serial number of the certificate that has been revoked.
    With the above limitations, which mostly revolve around scalability, it is clear that there are some drawbacks to using CRLs. Hence the introduction of the Online Certificate Status Protocol (OCSP). OCSP reduces the overhead associated with CRLs. There are server/client components to OCSP: the OCSP Responder, which is the server component, and the OCSP Client. The OCSP Responder accepts status requests from OCSP Clients. When the OCSP Responder receives the request from the client, it then needs to determine the status of the certificate using the serial number presented by the client. First the OCSP Responder determines if it has any cached responses for the same request. If it does, it can then send that response to the client. If there is no cached response, the OCSP Responder then checks to see if it has the CRL issued by the CA cached locally on the OCSP. If it does, it can check the revocation status locally and send a response to the client stating whether the certificate is valid or revoked. The response is signed by the OCSP Signing Certificate that is selected during installation. If the OCSP does not have the CRL cached locally, the OCSP Responder can retrieve the CRL from the CDP locations listed in the certificate. The OCSP Responder then can parse the CRL to determine the revocation status and send the appropriate response to the client.
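    The contrast described above — every client downloading and scanning the full revocation list versus a responder answering per-serial queries from a cached copy — can be shown with a toy model. This is purely illustrative Python, not a real PKI implementation; the class and serial numbers are invented for the sketch.

```python
# Toy model of the revocation checks described above (not a real PKI client).
# A "CRL" here is just the full list of revoked serial numbers that every
# client must download and scan; an OCSP-style responder instead answers
# per-serial status queries from a cached copy of that same list.

class OcspResponder:
    def __init__(self, revoked_serials):
        # Cache the CA's revocation list once, server-side.
        self._revoked = set(revoked_serials)

    def status(self, serial):
        """Answer a single-certificate status request."""
        return "revoked" if serial in self._revoked else "good"

def crl_check(crl, serial):
    """Client-side CRL check: scan the whole downloaded list for one serial."""
    return "revoked" if serial in crl else "good"

crl = [1001, 1002, 1003]          # every client downloads all of this
responder = OcspResponder(crl)    # clients ask about one serial at a time
```

    The scalability point follows directly: the CRL path moves the whole list to every client, while the OCSP path moves one small signed answer per certificate.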

  • Questions about Mapping GL Accounts to Group Accounts

    Hi,
    I have some questions about mapping gl accounts to group accounts while configuring OBIEE APPS 7.9.6.3 with EBS R12 as a source:
    FIRST QUESTION.-
    For file file_group_acct_codes_ora.csv, I have the following accounts from my customer:
    101101 - Caja Administrativa
    101102 - Fondo Revolvente
    101103 - Caja de Cambios
    101104 - Efectivo en cajero
    This group of accounts is named CASH. Now, my customer said that this group begins at 101101 and ends at 101199, but at the moment they only have these 4 accounts in GL; the rest of the accounts, I mean 101105-101199, are not used right now; they are going to be used in the future.
    So, my question is, in file_group_acct_codes_ora.csv, how should I enter this group:
    In this way:
    CHART OF ACCOUNTS ID,FROM ACCT,TO ACCT,GROUP_ACCT_NUM
    50308,101101,101104,CASH
    Or in this way:
    CHART OF ACCOUNTS ID,FROM ACCT,TO ACCT,GROUP_ACCT_NUM
    50308,101101,101199,CASH
    I mean, is there any problem if I use the second way, or is it necessary to do it the first way, and why?
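    One way to see the difference between the two choices is to model the lookup: any account that falls between FROM ACCT and TO ACCT gets the group. This is a minimal sketch of that range matching; the function and the assumption that ranges are applied inclusively are mine, not the actual OBIEE Apps ETL code:

```python
# Hypothetical model of a group-account range file like
# file_group_acct_codes_ora.csv: rows of (chart_id, from_acct, to_acct, group).
RANGES = [
    (50308, 101101, 101199, "CASH"),   # the wider range from the question
]

def group_for(chart_id, account):
    """Return the group whose [from_acct, to_acct] range contains account."""
    for cid, lo, hi, grp in RANGES:
        if cid == chart_id and lo <= account <= hi:
            return grp
    return None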
    SECOND QUESTION.-
    For file file_group_acct_names.csv, when I update with a new group of accounts, is there any rule or size boundary for GROUP_ACCOUNT_NAME?
    THIRD QUESTION.-
    For file_group_acct_names.csv, what is the value in column LANGUAGE? I mean, is EBS language?, DB language?, server language?
    I hope that someone can help me, because I need to clarify this so that the first full load doesn't end with an error because of it.
    Regards,
    Arnulfo

    I'll take some broad swipes at this and let the smarter people come fill in the details.
    We have a true 1:1 setup in our office and have moved to PHDs as a means of protecting against downtime. The thinking is that we will have a spare machine lying around with our base installation ready to go. If a user's machine fails we'll replace it with the spare machine, let it sync the user directory from the server, and we're back in business. It's no substitute for a real backup system, but it potentially avoids having to run a restore from your backups. It also reduces network traffic compared to plain networked homes, and still lets your users work if the server goes down, but provides the benefits of centralized management. John DeTroye wrote a nice article about this.
    If you've already got data on your "client" Mac you will need to move it onto the server. PHDs will download data from the server to the client on the first sync, but will not upload a complete home directory from the client to an empty directory on the server. You'll find some posts in this forum discussing how people have gone about migrating data prior to that first sync.
    WGM allows you to establish exclusions for stuff you don't want to sync.
    One thing to watch out for in the scenario you describe is the so-called "rabbit effect." Assume Bob uses Mac1 as his primary machine. If one day he logs into Mac2 his home directory will be downloaded to Mac2. Once he returns to Mac1 he'll still be cluttering up Mac2 with his data. If he logs into Mac3 the next day and Tom and Sue are also periodically logging into different machines, you can see how you'll end up with a mess pretty quickly.
    Hope this helps.

  • Misc questions about DFS

    Hi, I'm a newbie in DFS and I have several questions about DFS that I hope somebody could answer me:
    1. Introduction/theory/basic stuff:
        I've found this webpage "How DFS Works" at https://technet.microsoft.com/en-us/library/cc782417%28v=ws.10%29.aspx. But looking at its date,
    I'm pretty sure it was written for Windows Server 2003.
       Is there any updated webpage for Windows Server 2008 or later?
    2. My computers are in a domain/AD 2008 R2. I'm pretty sure one of the servers is a DFS root server. Are we allowed to have more than one DFS root server in a domain?
    3. Is there a way (GUI or command line) to list out all DFS root servers?
    4. I'm also reading a white paper on the "Microsoft File Server Migration Toolkit" that can be downloaded at http://www.microsoft.com/en-us/download/details.aspx?id=17959.
    On pages 16 & 17 of that document, it's written that "DFS consolidation roots" can only be hosted on a DFS root server running one of the following:
       Windows Server 2003 Enterprise Edition SP2
       Windows Server 2003 Datacenter Edition SP2
       Windows Storage Server 2003 Enterprise Edition
       Windows Server 2008 Datacenter Edition
       Windows Storage Server 2008 Enterprise Edition
       Like most people now in year 2015, I'm not going to use version 2003. That leaves me two choices. But I actually have a server "Windows Storage Server 2008 R2 Standard Edition" sold by Dell. Do you think it could work like "Windows Storage Server 2008 Enterprise Edition"? Otherwise, the only option for me would be "Windows Server 2008 Datacenter Edition"?
    Thanks in advance

    Hi,
    1.Is there any updated webpage for Windows Server 2008 or later?
    https://technet.microsoft.com/en-us/library/cc732863%28v=ws.10%29.aspx?f=255&MSPPError=-2147217396
    2. My computers are in a domain/AD 2008 R2.  I'm pretty sure one of the servers is DFS root server.  Are we allowed to have more than
    one DFS root servers in a domain?
    You could have more than one DFS root server in a domain.
    3. Is there a way (GUI or command line) to list out all DFS root servers?
    We could use the Dfsutil.exe tool to manage the DFS.
    http://blogs.technet.com/b/filecab/archive/2008/06/26/dfs-management-command-line-tools-dfsutil-overview.aspx
    4. Consolidation roots are supported to run on Windows Server 2008 R2, but Windows Server 2008 R2 is not supported as a DFS root.
    Regards.

  • Question about saving calendar and contacts

    Good Morning to all ,
    I have a question about how to transfer the calendar and contacts files to the new Outlook profile after an IMAP migration. What is the best way to transfer this kind of data before the IMAP migration, for example to prep 70 machines? Is there any PowerShell or VBS script to save the calendar and contacts into a PST file?
    Thank you in advance.
    Thiago.

    So what is the best way to transfer this kind of data before the IMAP migration, for example to prep 70 machines? Any PowerShell or VBS script to save the calendar and contacts into a PST file?
    Without knowing the Outlook version involved and/or whether there are any additional email accounts configured in any given profile beyond the one being migrated to IMAP, some general comments:
    #1 - Simply add the same email account as an IMAP account to the same profile. That will result in the "default data file" remaining the same, which contains the default contact & calendar folders. Nothing limits the same email account to being configured with multiple protocols, unless there is a setting that also needs to be set at the server level (i.e. GMail).
    #2 - Delete the related non-IMAP account from the profile; that has no impact on the data file associated with the account.
    At a minimum, the Contacts/Calendar folders remain as is, with the only caveat being that the email folders in the original PST file will contain data which you (your users) may or may not want to completely get rid of (something you may want to check/confirm beforehand).
    Karl Timmermans [Outlook MVP] "Outlook Contact Import/Export/Data Mgmt" http://www.contactgenie.com

  • Question about /usr/local/bin in Mavericks ...

    I just did a clean install of Mavericks on a 2010 MacPro. I understand Mavericks does not replicate this path (/usr/local/bin) when doing a clean install. Prior to this I was running 10.7.xx and I had a few compiled binaries installed in the bin folder. How do I address this? Can I simply recreate the path manually without risk?
    Thanks.
    -paul.

    etresoft (Jan 15, 2014, in response to Paul Figgiani) wrote:
    So why would you expect /usr/local/bin to be there? If you install custom, low-level software like this on one system, it will not get migrated to the new system.
    As I stated in my original post, I did not expect it to be there. All I'm asking is how do I address the issue? Do I recreate it manually without any risk of screwing something up?

  • Questions on SSRS migration options.

    Greetings. I've done some googling and still have some questions on SSRS migrations.
    I need to migrate three windows 2003/ 2005 SSRS instances to a windows 2008R2/ SSRS 2012 instance.
    I was eyeballing the Installation Wizard referenced here. Per this doc, this is quite possible:
    "Editions and Versions You Can Upgrade: SQL Server 2012 Setup provides upgrade support for the following earlier editions of Reporting Services: SQL Server 2012, SQL Server 2005 Reporting Services SP4..."
    Note that this doc doesn’t mention anything about OS specific versions.
    This doc also mentioned running the Upgrade Advisor, which I was planning on doing. However, before doing this I then stumbled upon
    this doc, which clearly states:
    Prerequisites for installing and running Upgrade Advisor are as follows:
    Windows Vista SP1, or SP2, Windows 7 and Windows Server 2008 R2.
    So the way it looks is that I can do the actual upgrade using the Installation Wizard, but I can’t run the Upgrade Advisor due to the OS being 2003. This seems odd.
    Can someone please confirm/ deny this?
    Thanks in advance! ChrisRDBA

    You could run the 2008 R2 version... 
    https://msdn.microsoft.com/en-us/library/ms144256(v=sql.105).aspx
    The differences between 2008 R2 and 2012 are relatively minor, in fact, they look exactly the same...  I agree with you though, the 2012 should work...

  • Quick (and urgent) Question about Intel G5's

    Just a quick question about the new intel G5's.
    I currently have bunch of software for my PPC G5 which is a Dual 2ghz. Software includes Adobe CS2, Macromedia Studio, Quark 6, etc, etc.
    If I purchase the new intel mac, would I be able to use the same software? or would I be forced to purchase a whole new set of everything I currently have?
    If the software will work on the intel G5, would it perform at the same rate/better than how it performs now on my PPC g5?
    Thanks in advance for any help.

    Rosetta:
    Most of the time you get a real 'hit' when a PPC program first opens. Very sluggish. They will also require and use more memory than otherwise.
    Tests from last August aren't as helpful; there have been improvements, letting the Mac Pro pull even further ahead.
    http://www.barefeats.com/quad06.html
    Comparison Mac Models shows scores of all models. So there is 2x as much or more processing power and bandwidth, better video, as well as disk drives. A 'base' configuration would be 4-6GB RAM.
    And there are differences, more than between G4 and G5.
    People with experience would be in Mac Pro Discussions.
    Don't use Migration Assistant; upgrade to CS3 etc. and reinstall all your applications fresh.
    There are some known drivers and plug-ins that can be problems.
    Mac Pro 2GHz 4GB 10K Raptor RAID Cinema HD   Mac OS X (10.4.9)   WD RE RAID Aaxeon FW800 PCIe MDD-G4 APC RS1500 Vista

  • Hi, I am about to migrate

    Hi, I am about to migrate from my 2006 Intel iMac (OS 10.6.8) to a brand new 15 in. MacBook Pro Retina. I understand that it is better to do this at setup to avoid problems.
    Question 1: Should I delete old and unwanted applications on the iMac first to avoid them being moved over, or will Migration Assistant handle that?
    Question 2: My wife's iPhoto 8.1.2 has been a nightmare for ages (we took the iMac to the store and an Apple 'genius' got her in this state). She has tried to sort it out before we run Duplicate Annihilator on it. I don't want to import all the old problems. Her iPhoto library (the one we want to keep) has recently appeared on her desktop. When I click on it, iPhoto opens and the pics she wants are there OK. But in her Pictures folder there is a Library.iphoto and a Library.6iphoto. When I click on these I get a dialogue: THE IPHOTO LIBRARY NEEDS TO BE UPGRADED... I cancelled this in fear of another almighty muddle! My own Pictures folder just has the one iPhoto library and all is OK. This seems to have happened when I decided to use Time Machine to back up for the first time recently. I have now turned Time Machine off, reformatted my ext. HD and reverted to using the LaCie backup software.
    Thanks,any help much appreciated.
    Artio

    Hi,
    we're about to migrate a couple of 6509-Es to a couple of new 6513-Es.
    We have a collapsed redundant core with some physical servers, a couple of Nexus switches, and about 40 access stacks. Everything is connected redundantly to the two 6509-Es.
    Our plan would be like this:  
    - First completely pre-configure the new 6513-E's, except for OSPF and BGP. And we shutdown all the SVI's 
    - Label all cables going to the old 6509-E's (we print the new port numbers of the 6513-E to these labels)
    - Plug all these cables out of the first 6509-E
    - Get the first 6509-E out of the rack
    Everything will still be running since everything is still plugged into the second 6509-E. (But we're running on only one core switch now, so fingers crossed.)
    - Build the new 6513-E into the rack & power on
    - Build a trunk between the 6509-E and the 6513-E
    - Plug the cables into the new 6513 according to the label info.
    - Shut down the SVIs (with a script) on the old 6509-E and bring them up on the 6513-E (also with a script). We will then have a short downtime for connected networks, and no Internet or remote OSPF networks for a few moments.
    - De-activate BGP and OSPF (old 6509-E) and activate BGP and OSPF on the new 6513-E .
    Now all layer 3 functionality is running on the new 6513-E. We should be up and running again.
    - Now plug out the cables from the second 6509-E and get it out of the rack
    - Build the 6513 into the rack
    - Connect it to the other 6513 with a trunk.
    - Connect cables
    -Etcetera
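    The scripted SVI cutover in the steps above can be pre-generated so both paste-ready snippets exist before the maintenance window. A hedged sketch — the VLAN IDs are placeholders and, while "interface Vlan<n>" / "shutdown" is standard IOS syntax, verify everything against your own running config:

```python
# Sketch: pre-generate the per-VLAN SVI cutover snippets mentioned above.
# VLAN IDs are example placeholders; verify commands against your own config.

VLANS = [10, 20, 30]  # example VLAN IDs carried on the core SVIs

def svi_script(vlans, action):
    """Return IOS config lines that shut or bring up each SVI."""
    assert action in ("shutdown", "no shutdown")
    lines = ["configure terminal"]
    for v in vlans:
        lines += [f"interface Vlan{v}", f" {action}"]
    lines.append("end")
    return "\n".join(lines)

old_core_script = svi_script(VLANS, "shutdown")      # paste on the old 6509-E
new_core_script = svi_script(VLANS, "no shutdown")   # paste on the new 6513-E
```

    Generating both scripts from the same VLAN list keeps the shutdown on the old core and the no-shutdown on the new core symmetric, which shortens the layer-3 downtime window described in the plan.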

  • Question about transferring to new mbp

    I am a senior in high school this year and have had my macbook since 8th grade and can't even begin to say how much I love it! I am most likely going to be getting a new mbp in the near future for college and had a few questions about transferring files/applications over. I am pretty good with computers and yes I do realize that there is the migration assistant as well as time machine backups. I have kept time machine backups on an external drive ever since leopard and want to know when I get the new machine if I could just do a time machine restore. (Would this method keep the new ilife on the new mbp because I don't have 09) I also also upgraded the hdd from an 80gb to a 320 gb drive and have partitioned it to run win7 through bootcamp. Would win7 and all the settings automatically be transferred as well. In addition I am using virtual box to run ubuntu within osx. I have also downloaded countless applications online and wanted to know how they would transfer over If I don't have the disk or serial numbers to them. It seems it may be easier just to slap this hdd in the new mbp but I'm not sure if thats possible. Again thanks for the help

    Welcome to Apple Discussions.
    If your old MacBook is an Intel-based machine, it would be to your benefit to use Migration Assistant to get your new MBP up. With MA you have the option of transferring information from your MacBook over FireWire, USB or Ethernet connections. MA will walk you through how to connect the two machines together and ask what data you want transferred. If the MacBook is Intel-based and you choose all of the options, MA will make a mirror of the settings, data etc. onto the new MBP. If your old machine is a PPC machine, it is strongly advised not to migrate items such as applications to the new machine due to possible incompatibilities. MA will also ask if you want to use a Time Machine drive to migrate the data to the new machine, so you do have options. If you choose that route, remember Time Machine _does not_ back up Windows, so you would not be able to migrate that information using MA.
    Your question has been asked many many times, I would recommend do a thread search and you will find some very useful information.
    Regards,
    Roger

Maybe you are looking for

  • Applet invoke a java script function...possible ? how ?

    Hi, Is it possible for an applet to call a (client side) javascript function, present in its own html page? I know that within an Applet, I can get the context() and then do something like change the string on the browser status bar. What I wish is t

  • Acrobat doesn't print documents fully

    Hi there, I'm having an issue where certain PDF files do not print completely - i.e. 6 pages are sent successfully to the network printer, but only 2-3 pages actually get physically printed. When I switch to the Reader instead, it prints just fine. I

  • Print out problem

    hi! I have a little problem with print out. I want to print out like this: about two cm space between each number. I have 4 columns and 4 rows for(){ System.out.println(a+"           "+b+"          "+ d+"        "+e); }How do I write?

  • Once I have clicked a video the colour of the link no longer changes to let me know I have watched that video; why?

    once I have clicked a video, the colour of the link no longer changes to let me know I have watched that video; why?

  • "Auto-draft" Payment Run

    Hi all ! Can please some one throw some light on Auto draft payment run... Basically what I want is , is it possible to clear the open recievebles by the Automatic Payment Run and create the corresponding file automatically transmitted to the bank to