Splitting and migrating LMS

Due to growth of the network, I need to split a single LMS 3.2 server into two separate machines (master/slave).
I've prepared two new (virtualized) Windows servers and installed clean LMS 3.2 instances: server 1 (RME, CM) and server 2 (DFM, IPM, and HUM).
I've restored the previous backup on the new server 1 (it warns that the application set doesn't match, but I continued and restored CS, RME, and CM).
I've restored the previous backup on the new server 2 (same message, but I continued and restored DFM, IPM, and HUM).
Everything seems to work fine, except for the following:
(My next step is to set up the master/slave relationship; currently they are two standalone servers.)
Server 1:
Compliance Management: when I try to open a template, the following error occurs:
Problem with File /WEB-INF/screens/config/dcma/BaseLine/BaseLineViewer.jsp!!!nul
Server 2:
It won't let me run the changehostname.pl Perl script; it complains that the old hostname I enter doesn't match the old hostname it expects.
These are the only issues I've seen so far. My question: is this a supported procedure, and if not, what's the best practice for splitting a server into multiple servers, i.e. a master/slave(s) setup?
Thanks in advance for any help.

Problem with File /WEB-INF/screens/config/dcma/BaseLine/BaseLineViewer.jsp!!!nul
In the meantime I discovered that the "Device Type" is missing from the baseline templates I restored.
When I add a new template, it works fine, so I can reproduce this error.
Question: how can I restore the "Device Types" in the baseline templates? For some reason this information was lost, and that is what causes the error.
(I can't edit them via the GUI, so the solution would have to involve editing some file(s) on the file system.)

Similar Messages

  • Unicode export: Table-splitting and package splitting

    Hi SAP experts,
    I know there are a lot of threads related to this topic, but I have some new questions, hence this new thread.
    We are in the process of doing a Unicode conversion in our landscape (a CRM 7.0 system based on NW 7.01), running on AIX 6.1 and DB2 9.5. The database size is around 1.5 TB, so we want to optimize the export and import in order to reduce the downtime. As part of the process, we have tried table splitting and parallel export/import.
    However, we have some doubts about whether the table splitting has actually worked in our scenario, as the export ran for nearly 28 hours.
    The steps we followed:
    1.) Export preparation using SAPINST.
    2.) Table-splitting preparation, by creating a table input file with entries in the format <tablename>%<No. of splits> (a minimal example of such a file follows these steps). We also used the latest R3ta file and dbdb6slib.o (belonging to version 7.20, even though our system is on 7.01) with SAPINST.
    3.) The export itself, using SAPINST.
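    For illustration, the table input file mentioned in step 2.) is just one <tablename>%<splits> entry per line; the first entry below uses the table and split count discussed later in this post, and the second is purely hypothetical:
        PRCD_CLUST%36
        CDCLS%20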
    Some observations and questions:
    1.) After the table-splitting preparation completed, .WHR files were generated for each of the tables in the DATA directory of the export location. How many .WHR files should be created, and on what basis are they created?
    2.) Take the example of the table PRCD_CLUST (a cluster table) in our environment, which we split. 29 *.WHR files were created for this table, although the number of splits given was 36 and the table size is around 72 GB. We noticed that the first 28 .WHR files had lots of records while the 29th had only one. The packages for the first 28 splits were created quite quickly, but the 29th took several hours to complete; around 56 packages of 1 GB each were generated for that split, with only one R3load running and producing packages one by one.
    3.) Is there any rule of thumb for deciding on the number of splits for a table? Also, is there anything that needs to be specified in the input screen when using table splitting during the export?
    4.) What exactly is the difference between table splitting and package splitting? Are they effective together?
    If you have any questions or need any clarifications or further input, please let me know.
    It would be great if we could get some insight into this whole procedure. We know SAPINST takes care of a lot of things in the background, but we just want to be certain that we have done the right thing and that this is the way it should work.
    Regards,
    Santosh Bhat

    Hi,
    First of all, please ignore my very first response: I accidentally posted a reply meant for another thread. Sorry about that.
    Now coming to your questions...
    > 1.) Can package splitting and table splitting be used together? If yes or no, what exactly is the procedure to be followed? I observed that the packages also have entries for the tables we decided to split. So does package splitting or table splitting override the other, and only one of the two can be effective at a time?
    Package splitting and table splitting work together, because they serve different purposes.
    My way of doing it is this:
    When I do the package split I choose a packageLimit of 1000 and also split the tables I selected for table splitting out into separate packages (one package per table). I do this because it helps me track those tables.
    Once that is done, I follow it up with R3ta and the where-splitter for those tables.
    Then I use the manual migration monitor to do the export/import. As mentioned in the previous reply, you need to ensure you sequence your packages properly: large tables are exported first, use sections in the package list file, and so on.
    > 2.) If you are well versed in the table-splitting procedure, could you briefly describe the exact procedure?
    Well, I would say run R3ta (it will create multiple SELECT queries), followed by the where-splitter (which splits each SELECT into multiple WHR files).
    It would be best to go through some documentation on table splitting and then let me know if you have a specific question. Don't miss the role of the hints file.
    > 3.) I mentioned the versions of R3ta and the library file in my original post. Is this likely to be an issue? Also, is there a rule of thumb to decide on the number of splits for a table?
    The rule is to use executables of the kernel version supported by your system version. I am not well versed in 7.01 and 7.2 support; to give an example, I should not use a 700 R3ta on a 640 system, although it works.
    > 1.) After the table-splitting preparation completed, .WHR files were generated for each of the tables in the DATA directory of the export location. However, how many .WHR files should be created, and on what basis are they created?
    If you ask for 10 splits you will get 10 splits, or in some cases 11; the reason may be the field used to split the table (the WHERE clause). But I'm not 100% sure about that.
    > 2.) Take the example of the table PRCD_CLUST (a cluster table) in our environment, which we split. 29 *.WHR files were created for this table, although the number of splits given was 36 and the table size is around 72 GB. We noticed that the first 28 .WHR files had lots of records while the 29th had only one. The packages for the first 28 splits were created quite quickly, but the 29th took several hours to complete; around 56 packages of 1 GB each were generated for that split, with only one R3load running and producing packages one by one.
    I'm not sure why you got 29 splits when you asked for 36; one reason might be that the field (key) used for the split didn't have more than 28 unique values. I don't know how PRCD_CLUST is split; you need to check the hints file for the "key". For example, suppose my table is split by company code and I have 10 company codes: even if I ask for 20 splits, I will get only 10 splits (WHRs).
    Yes, the 29th file will always have fewer records. If you open the 29th WHR you will see that it has the "greater than" clause. The first and the last WHR files have the "less than" and "greater than" clauses respectively, a kind of safety net that lets you prepare the splits even before the downtime has started. These two WHRs ensure that no record gets missed, even though you might have prepared your WHR files a week before the actual migration.
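    To picture those boundary clauses, here is plain SQL that illustrates the idea; this is not the exact syntax R3ta writes into the WHR files, and the key column and values are hypothetical:
        -- first split: open lower bound
        SELECT * FROM PRCD_CLUST WHERE "KEYCOL" < '0000100000';
        -- a middle split: half-open range
        SELECT * FROM PRCD_CLUST WHERE "KEYCOL" >= '0000100000' AND "KEYCOL" < '0000200000';
        -- last split: open upper bound, catching rows added after the WHR files were prepared
        SELECT * FROM PRCD_CLUST WHERE "KEYCOL" >= '0000200000';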
    > 3.) Is there any rule of thumb for deciding on the number of splits for a table? Also, is there anything that needs to be specified in the input screen when using table splitting during the export?
    I'm not aware of any rule of thumb. For a first iteration you might choose something like 10 splits for 50 GB, 20 for 100 GB. If any table overshoots the downtime window, try increasing or decreasing its number of splits. A couple of times my total export/import time improved after reducing the splits for some tables (I suppose I was over-splitting them).
    Regards,
    Neel

  • StringTokenizer vs. split and empty strings -- some clarification please?

    Hi everybody,
    I posted a somewhat similar question once, asking whether it was best to convert StringTokenizers to calls to split when parsing strings, but this one is a little different. I rarely use split, because with consecutive delimiters it produces empty strings in the array it returns, which I don't want. On the other hand, I know StringTokenizer is slower, but it doesn't produce empty strings for consecutive delimiters. I would use split much more often if I could use it without checking every array element to make sure it isn't the empty string. I think I may have misunderstood the javadoc to some extent; could anyone explain to me why split produces empty strings and StringTokenizer doesn't?
    Thanks,
    Jezzica85

    Because they are different?
    Tokenizers are designed to return tokens, whereas split simply splits the String up into pieces; they have different purposes and uses, to be honest. I believe the results of previous discussions have indicated that tokenizers are slightly (very slightly, not really meaningfully) faster, and tokenizers do have the option of returning the delimiters as well, which can be useful and is functionality not present in a straight split.
    However, split and regex in general are newer additions to the Java platform and they do have some advantages. The most obvious is that you cannot use a tokenizer to split up values where the delimiter is multiple characters, and you can with split.
    So in general the advice given to you was good, because split gives you more flexibility down the road. If you don't want the empty strings then yes, just read them and throw them away.
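    A minimal sketch of the difference (the class name SplitDemo is just for illustration):
        import java.util.Arrays;
        import java.util.StringTokenizer;

        public class SplitDemo {
            public static void main(String[] args) {
                String s = "a,,b";
                // String.split keeps the empty string produced between consecutive delimiters
                System.out.println(Arrays.toString(s.split(",")));  // prints [a, , b]
                // StringTokenizer treats a run of delimiters as a single separator
                StringTokenizer st = new StringTokenizer(s, ",");
                while (st.hasMoreTokens()) {
                    System.out.println(st.nextToken());             // prints a, then b
                }
            }
        }
    So with split you filter out the zero-length elements yourself; with StringTokenizer they never appear in the first place.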

  • Difference between upgrade and migration for an Oracle database

    What is the difference between an upgrade and a migration for an Oracle database?
    Please share your thoughts.

    Well, the question is almost philosophical...
    In 9i there is a Migration Guide, whereas in 10g there is an Upgrade Guide. Furthermore, in 9i the command-line option is startup migrate, whereas in 10g it is startup upgrade.
    Some think "upgrade" when going to a new release, and "migration" when going to a new version. Others think "upgrade" when the new version replaces the database in place, and "migration" when the new version involves moving the database. Another point of view is: upgrade is technical, migration is for application/data.
    Well, after these explanations your notion of upgrade/migration will not be any clearer, but I think that is not very important; it is only a terminology game. The most important thing is to know what you need: a new version or a new release.
    Nicolas.
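    For reference, the two startup commands mentioned above, as issued from SQL*Plus:
        In 9i:           SQL> STARTUP MIGRATE
        In 10g and later: SQL> STARTUP UPGRADE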

  • How can I access files that I moved from an older MacBook Pro to a newer one via Firewire and Migration assistant.  The files show up on the new MacBook but cannot be opened.  Thanks!

    How can I access files that I moved from an older MacBook Pro to a newer one via Firewire and Migration assistant?  The files show up on the new MacBook but cannot be opened.  Thanks!

    Get Info, then check the permissions; add your current user name (it was probably different on the old Mac) and give your username full read/write permissions.
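    If the Finder route doesn't stick, the same fix can be applied from Terminal; the folder path here is hypothetical, so substitute wherever the migrated files actually live:
        # Take ownership of the migrated folder and grant yourself read/write access
        sudo chown -R "$USER" ~/Documents/Migrated
        chmod -R u+rw ~/Documents/Migrated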

  • I just bought a new Mac and migrated my CS6 over.  Now when I open it, I'm told I'm running a trial version of Cloud.  Why can't I use my CS6 version that I purchased?  I don't want to pay for something I've already bought.

    I just bought a new Mac and migrated my CS6 over.  Now when I open it, I'm told I'm running a trial version of Cloud.  Why can't I use my CS6 version that I purchased?  I don't want to pay for something I've already bought.

    Here's how to convert the trial to permanent:
    On receiving the Trial/Trial Expired screen:
    Make sure that you are online.
    Click on "License This Software".
    Perpetual product owners: log in with your Adobe ID and enter the product serial number.
    The product should be licensed successfully.

  • I have a huge video file, several GB in size, and I want to split it into clips and export each clip individually. Can you please help me with how to split and export the videos to my computer? It would be of great help!!

    I have a huge video file, several GB in size, and I want to split it into clips and export each clip individually. Can you please help me with how to split and export the videos to my computer? It would be of great help!!

    What version of Premiere Elements do you have and on what computer operating system is it running?
    Please review the following workflow.
    ATR Premiere Elements Troubleshooting: PE11: Project Assets Organization for Scene and Highlight Grabs from Collection o…
    But please also determine if your project goal is supported by
    a. format of your source
    and
    b. computer resources
    More later based on details that you will post.
    ATR

  • What is the best practice for setting up and migrating two notebook users on one desktop

    My wife and I have just purchased a desktop iMac and would like to migrate our two notebooks to it.  I plan to use the Migration Assistant application to achieve this.  I am trying to gather information on the best ways to establish a single merged music library, multiple photo libraries kept separate, and unique user logins.  I'm sure a lot of this may appear pretty straightforward, setting up multiple users and all, yet neither of us has ever set up multiple users on our personal notebooks, so there will be some learning involved.  Any advice on the hang-ups or tricks users have encountered would be greatly appreciated.  Thank you.

    Step one would be to follow the steps in Pondini's Setup New Mac guide and migrate your account, apps, and settings from your machine, using the Setup Assistant on first boot. Then, use the Migration Assistant to migrate your wife's account to the machine. That should carry over her photos and music. Merging those libraries should be detailed in the respective help files for iTunes and iPhoto. I don't share anything and only have one account, so I can't help with those details.

  • Corrupted time machine backup and Migration tool

    Thanks in advance to anyone that is able to help. Sorry for the long read.
    Recently I replaced my hard drive because it was slowing down and had a very large number of bad sectors. After replacing the HD with a new one, I tried to use Migration Assistant to move my information from the old one. As has happened with many bad hard drives, it could not detect the old drive. This always happens when I try to migrate from a drive with bad sectors; the healthy ones always work.
    So, in an attempt to make the migration tool see the files and migrate them, I am trying to use the old drive to spoof a Time Machine backup.
    First I set my old HD as the location of the new Mac's TM backup, just to get a TM backup file in place. The first problem I ran into was that Time Machine backups are write-protected at the kernel level and can't be modified. The write protection comes from the TMsafetynet kext, but I found a helper in it that bypasses the protection, so I can delete files in the backup like so:
    sudo /System/Library/Extensions/TMsafetynet.kext/Helpers/bypass rm -Rfv /Volumes/TIMEMACHINEDRIVE/Backups.backupdb/whatever files to delete
    And so I removed and modified files in the Time Machine backup slightly, and both Migration Assistant and Time Machine reflected the changes perfectly, without error.
    Next I deleted everything in the Time Machine backup and moved the entire contents of my old drive into it, in such a way as to mimic a normal Time Machine backup.
    The good news is that Time Machine reads my jury-rigged backup perfectly (which looks pretty weird, since there is no continuity between past and present), but Migration Assistant now does not see it.
    If anyone is able to help, that would be awesome. I replace bad hard drives all the time that won't copy with Migration Assistant, and transferring the files manually isn't easy or clean. It's a pain.
    Thanks a million for any advice!

  • Just reinstalled software and migrated data from Time Machine, now iMac does not recognize printer or camera

    Under the direction of Apple Support, I recently erased my iMac (Intel) hard drive, reinstalled and updated software and migrated data from an external hard drive via Time Machine.
    Now, after clicking on "PRINT", the print dialog box says "printing" then says "no pages found", and finally says "stopped"; and when I plug in my camera to download photos, the camera is not recognized.  Both printer and camera are connected via USB.
    I checked connections, verified printing and iPhoto preferences, turned printer off then on again, unplugged and replugged USB connections, and checked and followed directions in the Help menu.

    Thanks very much, I'd just tried rkaufmann87's suggestion and entered "This solved my question" when your suggestion popped up—you deserve a "This solved my question" too, though I saw only an option for "This helped me".
    Before I got rkaufmann87's suggestion, I repaired disk permissions and that resolved the camera problem.  Both of you nailed the solution.
    Thanks again!

  • What is the difference between upgradation and migration.

    Hi gurus,
    What is the difference between upgradation and migration?
    I am actually involved in an upgrade project. My role here is:
    1. First I check the queries in 3.5, save each query, and transport it; I also check the query in BEx Analyzer.
    2. Then I go to BI 7.0, find the query, give the query name, and save the query.
    3. Once the query is saved, I come back to 3.5 and open the query; it will not open. Then I go to 7.0 and check the query in Analyzer as well. That is my job here.
    I am a little confused: how does the query come into 7.0, and why are we saving the queries in both 3.5 and 7.0 when the queries are already available in 7.0? Why do this work?
    Also, can I know which of these objects to upgrade, whether it is necessary, and if so how to upgrade them: InfoObjects, transfer rules, transfer structures, InfoSources, DataSources, update rules, ODS objects, cubes?
    Points will be assigned.
    Thanks & Regards
    prabhavathi

    Hi,
    I was talking in a general sense, not at the query level.
    If you're talking about migration at that level, meaning as part of a larger upgrade (in your case 3.x to 7), there may be many places where you need to do this kind of activity.
    For example, migration to the new data flow, migration of Web templates from BW 3.x to NetWeaver 2004s, etc.
    Hope this helps.
    Thanks,
    JituK

  • Can we split and fetch the records in Database Adapter

    Hi,
    I designed a Database Adapter to fetch records from an Oracle database. Sometimes the Database Adapter needs to fetch around 5,000 or 10,000 records in a single shot. In that case my BPEL process chokes and I get the error:
    java.lang.OutOfMemoryError: Java heap space at java.util.Arrays.copyOf(Arrays.java:2882) at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:100)
    Could someone help me resolve this?
    In the Database Adapter, can we split and fetch the records when there are more than 1,000?
    For example, the first 100 records as one set, the next 100 as a second set, and so on.
    Thank you.

    You can send the records in batches using the debatching feature of the DB adapter. Refer to the documentation for implementation details.
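    A sketch of what that can look like in the adapter's .jca configuration file; the property names MaxRaiseSize and MaxTransactionSize are quoted from memory of the DB adapter documentation (they apply to polling/inbound scenarios), so verify them against your version before relying on them:
        <!-- inside the adapter's activation spec: publish at most 100 rows per message,
             and process at most 1000 rows per polling transaction -->
        <property name="MaxRaiseSize" value="100"/>
        <property name="MaxTransactionSize" value="1000"/>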

  • How can I determine what sites are being referenced within Central Admin Upgrade and Migration Manage Databases Upgrade Status?

    When I go to Central Admin > Upgrade and Migration  > Manage Databases Upgrade Status, I have 2 content databases which have the status:
    Database is up to date, but some sites are not completely upgraded.
    How can I determine which sites are not completely upgraded?

    Manage Databases Upgrade Status will show you the details of all active and offline DBs; you can get the same result using the PowerShell cmdlets below.
    Get-SPDatabase and Get-SPContentDatabase will list all active databases/content DBs in the farm, including service application DBs and the Central Admin DB.
    Get-SPDatabase | Format-Table Name, ID
    Coming back to your question: if you find that some sites are not completely upgraded, run the command below to understand the cause of the issue on the specific DB.
    Test-SPContentDatabase WSS_ContentDB_Name
    If you find any missing-file issues in the DB, resolve them in order to upgrade the content database.
    (This verifies that all customizations referenced within the content database are also installed in the web application. The cmdlet can be issued against a content database currently attached to the farm, or against one that is not connected to the farm.)
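    To answer the original question directly (which site collections are not completely upgraded), here is a sketch that relies on the NeedsUpgrade property of each site collection; the content database name is the same placeholder used above:
        # List site collections in the given content DB that still need upgrading
        Get-SPSite -ContentDatabase "WSS_ContentDB_Name" -Limit All |
            Where-Object { $_.NeedsUpgrade } |
            Format-Table Url, NeedsUpgrade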
    Use the Upgrade-SPContentDatabase cmdlet to resume a failed database upgrade or begin a build-to-build database upgrade against a SharePoint content database:
    Upgrade-SPContentDatabase WSS_Content
    reference:
    http://technet.microsoft.com/en-us/library/ff607813(v=office.15).aspx
    http://technet.microsoft.com/en-us/library/ff607941(v=office.15).aspx
    Thanks, ShankarSingh(MCP)

  • 1:n Message split and Abap Proxies??

    Hello,
    Can I use message split and an ABAP proxy together? My scenario is MDM -> File -> XI -> Proxy -> BI.
    I am getting a single file syndicated from MDM; if I use message mapping in XI to do the 1:n split, can I use it with ABAP proxies? As per the link below, the XI adapter is not present in the list. We are on PI 7.0 SP14. Thank you.
    http://help.sap.com/saphelp_nw04/helpdata/en/42/ed364cf8593eebe10000000a1553f7/frameset.htm
    Thank you for any suggestions.

    Hi Thanujja,
    If you see the message from Raj, I don't think we can split the messages for the proxy. This is because the splitting of messages takes place at the adapter level, and only for the adapters on the Java stack.
    As suggested by Guru, you can try splitting the messages in the inbound proxy instead of using a BPM; that way you can achieve good performance.
    Thanks,
    Srini

  • I just installed the latest version of iPhoto and it erased my iPhoto events. Now I don't have an iPhoto library any more, and I only have my events on my iPhone and in my MobileMe gallery. How can I download and migrate the events in my MobileMe gallery to events in iPhoto?

    I just installed the latest version of iPhoto and it erased my iPhoto events. Now I don't have an iPhoto library any more, and I only have my events on my iPhone and in my MobileMe gallery. How can I download and migrate the events in my MobileMe gallery to events in iPhoto?

    I also installed Lion before, and since I had FileVault activated, I lost my user information in Time Machine. So I have a useless backup. I only have events on my iPhone (since I didn't sync photos) and in the MobileMe gallery, and I have already migrated to iCloud. So?
