Cross-Empire Data Migration missing characters (UTF-8?)

Hi. We are ready to migrate over 10,000 home directories from our Novell/NetWare platform to Windows 2008 R2.
I use Storage Manager 3.0.3.9 with Novell Client 2 SP2 for Windows Server 2008 R2.
NetWare 6.5 SP7 on the source systems.
When I migrate home directories I lose all åäö in my file and folder names.
If I instead do a copy with the Novell Client in Windows Explorer on the same server, I get complete names. So it must be something in the migration phase?
Any clues?
Regards
Stefan

Stefan,
Send an email with a runtime config to storagemanager at novell.com. We will take a look.
Thanks,
NSM Development

Similar Messages

  • Cross-Empire Data Migration - DST?

We are still evaluating NSM 3.0.4 Cross-Empire Data Migration....
It looks like the migration doesn't work with Dynamic Storage Technology (DST) shadow volumes....
I thought that since the NSM Engine and/or Agent performing the migration has to have the Novell Client installed, it would be able to migrate from both the primary and shadow volumes...
However, our initial testing shows that it is ignoring the shadow-based data...
Is this an oversight? Use of an incorrect/outdated API?
Does anyone else have any experience of this? We have rather a large amount of data in our shadow volumes....!
    Cheers
    David


  • Cross-Empire Data Migration - Mirror?

Evaluating the eDir-to-AD Cross-Empire Data Migration process...
Is there a copy option that allows mirroring of source data to the target?
i.e. the removal of files in the target that no longer exist in the source
We are looking for a staged migration, i.e. early migration jobs copy the bulk of the data in the days/weeks preceding the actual migration, with a final migration pass set to mirror/sync the source and target
    Hope this makes sense
    Thanks
    David


  • Cross Empire Migration IRFs

When performing a Folder to Folder migration (eDir to AD), all of my data migrates successfully. My top-level folder on the target receives full rights (which were assigned in eDir). However, the sub-folders do not have the IRFs that were assigned in eDir. Is this how the migration works when transferring rights, or am I doing something wrong?
    Thank you!

On 03.10.2013 18:26, kbannister wrote:
It transfers all rights that are _set_ on files/folders on the NSS side -
it does not handle IRFs (NTFS has no equivalent concept).
Sadly, the documentation doesn't really mention this!
At least this was the case with v3.0.4 when we did our migrations.
But it does give you the list of IRFs that are used on the selected
data when you look at the Preview Report, for example, so you know to check
those manually after the migration.
sk

Missing photos in iPhoto data migration between iMacs

OMG, please help!  I got my new iMac yesterday and did a data migration from my old Mac.  Two hours later, all my files and stuff were on the new machine.
    I just went into iPhoto and it says there are 16,000 pics there, but instead of photos all I have are 16,000 blank boxes.  You click on them as if you were opening a pic and all you get is a triangle with an exclamation mark in the middle.
    Have I lost all my pictures??

It says the iPhoto library is 129.82 GB.
    Sounds like the photos are there anyway.
    Option 1
Back up, then try rebuilding the library: hold down the Command and Option (or Alt) keys while launching iPhoto. Use the resulting dialogue to rebuild. Choose Repair Database. If that doesn't help, then try again, this time using Rebuild Database.
    If that fails:
    Option 2
Download iPhoto Library Manager and use its rebuild function. (In Library Manager it's the File -> Rebuild command.)
    This will create an entirely new library. It will then copy (or try to) your photos and all the associated metadata and versions to this new Library, and arrange it as close as it can to what you had in the damaged Library. It does this based on information it finds in the iPhoto sharing mechanism - but that means that things not shared won't be there, so no slideshows, books or calendars, for instance - but it should get all your events, albums and keywords, faces and places back.
    Because this process creates an entirely new library and leaves your old one untouched, it is non-destructive, and if you're not happy with the results you can simply return to your old one. 
    Regards
    TD 

Confused about the PR & PO data migration approach (new to MM module)

    Hi All,
I'm pretty confused about the PO data migration approach when a PO has GR, partial GR, or GR & IR. I'm hoping someone can enlighten me. I understand that we typically don't migrate a PO once it is GR & IR; the FI team will usually bring it over to the new system as a vendor open item in AP. What about PRs or POs that have gone through a full or partial release strategy? What is the follow-up process? I have created a criteria table below. Could someone point me in the right direction? Thanks in advance.
PR

Criteria              | Data migration required | Notes
Open and Released     | Y                       |
Open and not Released | Y                       |
Flag for Deletion     | N                       |
Flag for Block        | Y                       |

PO

Criteria              | Data migration required | Notes
Open and Released     | Y                       |
Open and not Released | Y                       |
GR but no IR          |                         |
GR & IR               | N                       | AP will bring over as open item
Flag for Deletion     | N                       |
Flag for Block        | Y                       |
Partial               | Y                       | For partial GR, recreate PO only with missing GR quantity
    Regards,
    John

    Hi John,
The approach I have followed recently is to treat the PO as the base document and convert the other documents based on the PO's status. This means you first need to decide whether the PO is to be converted or not; then you can proceed to convert the related documents like PR, agreement, info record, source list, etc.
Also, the open quantity for a PO should be considered for both material and service line items.
Once a GR/SES is created, it gets updated in the PO history table EKBE with its related transaction/event type, i.e. EKBE-VGABE = 1 for GR and 9 for SES. Quantity and value also get updated for materials and services. You can compare these consumed quantities with the PO quantity.
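To make the EKBE check concrete, here is a minimal ABAP sketch (the PO number, item, and selection logic are illustrative assumptions, not part of the original post; reversals are netted via EKBE-SHKZG):

* Sketch: compare ordered quantity (EKPO) with net GR quantity (EKBE).
DATA: lv_ebeln  TYPE ekpo-ebeln VALUE '4500000001',  "hypothetical PO
      lv_ebelp  TYPE ekpo-ebelp VALUE '00010',       "hypothetical item
      lv_po_qty TYPE ekpo-menge,
      lv_gr_qty TYPE ekbe-menge,
      lv_open   TYPE ekpo-menge,
      ls_ekbe   TYPE ekbe.

* Ordered quantity of the PO item
SELECT SINGLE menge FROM ekpo INTO lv_po_qty
  WHERE ebeln = lv_ebeln AND ebelp = lv_ebelp.

* Net goods-receipt quantity from the PO history (VGABE '1' = GR, '9' = SES)
SELECT * FROM ekbe INTO ls_ekbe
  WHERE ebeln = lv_ebeln AND ebelp = lv_ebelp AND vgabe = '1'.
  IF ls_ekbe-shkzg = 'H'.            "'H' = credit, i.e. a GR reversal
    lv_gr_qty = lv_gr_qty - ls_ekbe-menge.
  ELSE.
    lv_gr_qty = lv_gr_qty + ls_ekbe-menge.
  ENDIF.
ENDSELECT.

* For partially received POs, this open quantity is what you would recreate
IF lv_gr_qty < lv_po_qty.
  lv_open = lv_po_qty - lv_gr_qty.
  WRITE: / 'Open quantity:', lv_open.
ENDIF.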
    Please see below from SCN and let me know if you need more info on PR or PO conversion.
    Purchase Requisition Conversion Strategy
    Thanks,
    Akash

  • Data Migration from CSV file to Database Table.

    Hi,
I have checked a few answered threads on data migration issues but haven't found a solution yet.
I am reading data from a CSV file into an internal table, but the complete row ends up in one field of the internal table.
How can I get the values into the different fields of the internal table from the CSV file?
    Thanks & Regards.
    Raman Khurana.

    Hi,
If you are using GUI_UPLOAD, you might have missed setting has_field_separator to 'X'.
CALL FUNCTION 'GUI_UPLOAD'
  EXPORTING
    filename            = 'C:\File.csv'
    filetype            = 'ASC'
    has_field_separator = 'X'  "<= Set this to 'X'
  TABLES
    data_tab            = itab.
    Regards,
    Manoj Kumar P
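One caveat worth adding: with FILETYPE = 'ASC' and HAS_FIELD_SEPARATOR = 'X', GUI_UPLOAD splits columns at tab characters. If the file really is comma-separated, a common workaround is to upload each line into a single string and SPLIT it yourself - a minimal sketch, with hypothetical field names:

* Sketch: upload the CSV as raw lines, then SPLIT each line at commas.
DATA: lt_raw  TYPE STANDARD TABLE OF string,
      lv_line TYPE string,
      lv_name TYPE string,   "hypothetical first CSV column
      lv_city TYPE string.   "hypothetical second CSV column

CALL FUNCTION 'GUI_UPLOAD'
  EXPORTING
    filename = 'C:\File.csv'
    filetype = 'ASC'
  TABLES
    data_tab = lt_raw.

LOOP AT lt_raw INTO lv_line.
  SPLIT lv_line AT ',' INTO lv_name lv_city.  "one target per CSV column
* ...move lv_name / lv_city into the typed internal table here
ENDLOOP.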

Best Practice for Data Migration install v1.40 - Error 2732 Directory Manager

    Hi
I'm attempting to install SAP Best Practices for Data Migration 1.40 on Windows Server 2008 R2 (64-bit).
Prerequisite error: the installation program stops with a missing-file error:
The following file was not found:
... \migration\InstallationWizard\BusinessObjects Data Services\setup.exe
The file is necessary for successful installation. Please connect to the internet or refer to the Quick Guide (available in SAP note 1527151) for information regarding the above file.
The Windows Installer log displays:
Error 2732 Directory Manager not initialized
SAP note 1527151 does not exist or is internal.
Any help would be appreciated on the root cause of the error, as the file does not exist in that folder in the installation zip file.
The other prerequisite of .NET 3.5.1 is already met.
The patch has been available since 20.11.2011, so I presume it is a good installation set.
    Thanks,
    Alan

    Hi Alan,
There are details on Data Migration v1.4 installations on the SAP website and marketplace. The link below should guide you to the right place. It has a PowerPoint presentation and other useful links as well.
    http://help.sap.com/saap/sap_bp/DMS_V140/DMS_US/html/index.htm
    Arun

  • InDesign CS4 to PDF - Missing characters

    I created a brochure (2-sided with lots of copy) using InDesign CS4; I exported it to PDF [Compatibility: Acrobat 5 (PDF 1.4)].
    When opened in Acrobat 9 Professional on my computer, the file contents were perfect. The file was sent as an email attachment to prepress. When the printer opened it using Acrobat 7.0 Professional (don't know if it was on PC or Mac; they have both), there were missing characters.
    The word "officer" occurred eight times in the document. In two of those occurrences, the characters "ffi" were missing. The remaining 926 words conveyed without a problem.
    Any insights will be appreciated.
    Frank

    Embedding the entire font shouldn't make any difference whatsoever. InDesign embeds all the glyphs that are used. If the font is embedded at all, Acrobat only uses the embedded font.
    Furthermore, Acrobat (including Acrobat 7) uses the local fonts option if and only if the font being accessed is not at all embedded and that particular font is on the user's system.
    What is very suspect here is that the ffi ligature appears correctly in only 2 out of 8 occurrences in the word officer and that you see no problems on your system using Acrobat 9 Pro.
    Here is what I would look for:
    (1) Ascertain that the font is really embedded at all. Use CTRL-D and look to see whether the font is either embedded or subset embedded. If neither, that is the source of the problem. You could also run a preflight profile to ascertain that all fonts are properly embedded.
(2) You might want to find out what your printer is really doing with your file. Is he opening a page or two in Illustrator, something that can cause very strange PDF corruption? Are there any steps he is performing before opening it in Acrobat, such as processing it in some prehistoric workflow system that has more errors than an early Mets game?
(3) Find out whether your printer at least has Acrobat 7 updated with all the updates issued by Adobe. There were a whole carload of them; I think about 12 such updates in the 24 months that Acrobat 7 was current. You might also want to suggest that at this point, Acrobat 7 is a fairly old version for any printer that wants to stay up to date in terms of PDF print publishing workflow.
    - Dov

  • Data migration methods

Hi experts. I am working on a study for a data migration project where data from multiple legacy systems will be moved into SAP. Apart from LSMW and BDC, what other useful methods are there when the data migration involves huge volumes of upload data?

    Hi,
    Obviously the answer depends to a certain extent on the circumstances of your project (complexity of data, volume, etc) and on the object in question, but here are a few rules of thumb.
    If you are new to this I would stay away from IDOCs and BAPIs as the set up is often a little complicated and if there are errors the problem is often not too easy to identify. And IDOC processing is not always as quick as people assume. For example, the customer master IDOC is processed in the background by a call transaction of XD01.
    For you I would suggest definitely using LSMW, and where possible within LSMW use the standard uploads. There will be situations where the standard upload does not cover everything (again using customers as an example, it will not include the extended address data or the long texts) but I would still use the standard upload to load as much as you can, and then write a couple of simple update BDC recordings (XD02, XK02, etc) to add what's missing.
    Some examples of the standard upload programs are:
    Customers - RFBIDE00
    Vendors - RFBIKE00
    Materials - RMDATIND
    All finance transactions - RFBIBL00
    Fixed assets - RAALTD01
    The standard uploads are normally well documented; they often use BDC (not RMDATIND) so are quite transparent; and they are generally pretty flexible.
    There are a few standard uploads which are not so good, such as the sales order load, and to a lesser extent the purchase order load. In these cases you might need to look at using the BAPIs but get help from someone who has used them before.
    You might want to use recordings for very simple loads like bank master.
    If migrating SAP to SAP, where possible you would look to transfer master data via ALE using IDOCs. But again you need to know what you are doing and it sounds like you are moving from a non-SAP to a SAP system anyway.
    Good luck,
    Ben
    http://www.harlex-ltd.com
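To illustrate the "simple update BDC recordings" Ben mentions, here is a minimal CALL TRANSACTION sketch; the screen and field names follow the classic XD02 customer maintenance, but treat the values as illustrative:

* Sketch: batch-input one field via CALL TRANSACTION (e.g. XD02).
DATA: lt_bdc TYPE STANDARD TABLE OF bdcdata,
      ls_bdc TYPE bdcdata.

* Initial screen of XD02 (classic customer maintenance)
CLEAR ls_bdc.
ls_bdc-program  = 'SAPMF02D'.
ls_bdc-dynpro   = '0101'.
ls_bdc-dynbegin = 'X'.
APPEND ls_bdc TO lt_bdc.

* Customer number to change (hypothetical value)
CLEAR ls_bdc.
ls_bdc-fnam = 'RF02D-KUNNR'.
ls_bdc-fval = '0000100001'.
APPEND ls_bdc TO lt_bdc.

* 'N' = run without screen display, 'S' = synchronous update
CALL TRANSACTION 'XD02' USING lt_bdc MODE 'N' UPDATE 'S'.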

  • New GL activated without data migration

    Hi all,
we have three projects lined up; one of them is New GL. For various reasons we have postponed the New GL go-live, but the other two projects want to go ahead with theirs, so we provided them an environment where New GL is active but has no migrated data.
What would be the risks when regression testing is done in such an environment? I have listed a few below; can anything else be expected?
    Cannot perform closing operations, as balances are not known and no previous data is available.
    Clearing and reconciliation functions cannot be performed.
    Cannot use the reporting function as there would be discrepancies in the output
Cannot check the balances.
    Transactions referring to the previous data such as bill of exchange, down payments etc cannot be executed.
If someone has gone through such a situation before, please share your experience.
    thanks
    kumar

    Hi Kumar,
    Find below the answer to your question
Cannot perform closing operations, as balances are not known and no previous data is available.
- You can perform the closing operations without any problem. It's just that you will not have previous data added to your existing data.
Clearing and reconciliation functions cannot be performed.
- You can perform the reconciliation function to the extent you have done postings, not for the previous data.
Cannot use the reporting function as there would be discrepancies in the output.
- You can run reports, but they will not give the correct picture, as the previous data is missing. Please keep in mind that balance sheet GLs carry a cumulative balance.
Cannot check the balances.
- You can, to the extent that you have done postings.
Transactions referring to the previous data such as bills of exchange, down payments etc. cannot be executed.
- Yes; to do so you need to migrate the previous data.
Suggestion: Activating New GL without data migration is an incomplete process; you should do the data migration at the earliest opportunity.
    Hope the above statements answer your questions.
    Regards
    Paul

  • RollOut: Data migration phase - material master

    Dear all,
we are going to implement SAP in one of our subsidiaries. The data migration phase for the material master is not that clear to me, because I am not very experienced with such projects.
Can anybody advise me on my intended approach?
I would like to subdivide the data migration process into the following separate steps:
    1. Analysis of the structure and the material master fields of the legacy system
    First I would require an example structure of the legacy system to get a first overview. This structure should be an excel sheet. Then I would discuss this structure with the staff of our subsidiary.
    2. Structure of our SAP system
    The structure of the SAP and all the necessary fields of the material master will be discussed with the staff of the subsidiary.
    3. Comparing of both structures
Compare both structures and define which fields are missing and which fields have to be added.
    4. Preparation of the migration tool (LSMW)
I would be very grateful for any replies, critique, or recommendations.
    Did I forget anything?
    Thank you very much
    Conan

    I think you basically got what needs to be done.
    I usually let the module teams deliver the SAP fields that are necessary for their process.
    For the source data I contact the team that deals with the legacy system.
Then we map the fields, finding the source fields that have no target, as well as mandatory target fields that have no source data.
    And I request this usually per material type, as each material type may have different requirements.
Start the process as early as possible, ideally right at project start.
I am currently doing a huge migration project (merging several SAP systems), started last December, and I am still missing a lot of mappings.
My experience is that I get mappings for the well-known values of a field; however, there are many records with different values, perhaps values that should no longer be used in the legacy system, but old records never got adjusted, so they come up as errors when loading.

  • Legacy Data Migration - question on conversion

    Hello All,
    We are trying to build a generic tool which could help the conversion process in data migration.
For example, if the material name in the legacy system starts with AB (AB6790), it has to be referred to in SAP as SQ6790.
So it is a matter of replacing the initial few characters during the conversion process.
I am expected to collect possible scenarios like this and
develop a generic tool that can cover the common conversion conditions.
Can you please help me with such scenarios that you may have come across in your projects?
    Thank you.
    Meenakshi

    Rahul,
I strongly suggest you do a search in the CRM General & Framework Forum on this topic.  If you are looking for specific objects to migrate, there are several posts on the major data elements in CRM (transactions, business partners, products).
    In general the technique is to use the LSMW combined with the XIF-IDOC adapter to convert data into the CRM system.
    My blog here will explain the general process:
    /people/stephen.johannes/blog/2005/08/18/external-data-loads-for-crm-40-using-xif-adapter
    Thank you,
    Stephen
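As a sketch of the prefix rule Meenakshi describes, here is one way a rule-driven conversion routine could look in ABAP (e.g. inside an LSMW field mapping); the rule table and its entries are hypothetical:

* Sketch: rule-driven replacement of a material's leading characters.
TYPES: BEGIN OF ty_rule,
         legacy_prefix TYPE c LENGTH 2,
         sap_prefix    TYPE c LENGTH 2,
       END OF ty_rule.

DATA: lt_rules TYPE STANDARD TABLE OF ty_rule,
      ls_rule  TYPE ty_rule,
      lv_matnr TYPE c LENGTH 18 VALUE 'AB6790',
      lv_new   TYPE c LENGTH 18.

* Hypothetical rule: legacy 'AB*' becomes 'SQ*' in SAP
ls_rule-legacy_prefix = 'AB'.
ls_rule-sap_prefix    = 'SQ'.
APPEND ls_rule TO lt_rules.

LOOP AT lt_rules INTO ls_rule.
  IF lv_matnr(2) = ls_rule-legacy_prefix.
    CONCATENATE ls_rule-sap_prefix lv_matnr+2 INTO lv_new.
    lv_matnr = lv_new.
    EXIT.
  ENDIF.
ENDLOOP.
* lv_matnr is now 'SQ6790'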

  • Validation rules applied to data migration templates at import

    Hi everyone!
    First post here for me, so please bear with me if I missed something.
    My company has just started the initial implementation of ByDesign. We come from a set of disparate and partially home-grown systems that we outgrew a few years ago.
    As part of this initial phase, we are basically re-creating the data on customers, suppliers, etc. since none of our existing systems makes a good source, unfortunately. We will be using the XML templates provided by ByDesign itself to import the relevant data.
    It has become clear that ByDesign applies validation rules on fields like postal codes (zip codes), states (for some countries), and other fields.
It would be really helpful if we could get access to the rules that are applied at import time, so that we can format the data correctly in advance, rather than having to play "trial and error" at import time. For example, if you import address data and it finds a postal code in the Netherlands formatted as "1234AB", it will tell you that there needs to be a space in the 5th position, because it expects the format to be "1234 AB". At that point, you stop the import, go back to the template to fix all the Dutch postal codes, and try the import again, only to run into the next validation issue.
    We work with a couple of very experienced German consultants to help us implement ByDesign, and I have put this question to them, but they are unaware of a documented set of validation rules for ByDesign. Which is why I ask the question here.
So just to be very clear on what we are looking for: the data validation/formatting rules that ByDesign enforces at the time the XML data migration templates are imported.
    Any help would be appreciated!
    Best regards,
    Eelco

    Hello Eelco,
    welcome to the SAP ByDesign Community Network!
    The checks performed on postal codes are country specific, and represent pretty much the information that you would find in places like e.g. the "Postal Codes" page in Wikipedia.
I recommend starting with small files of 50-100 records, assembled from a representative set of different records, in order to collect the validation rules that apply to your data in an efficient way. Only once you have caught these generic data issues would I proceed to larger files.
Personally, I prefer to capture such generic work items on my list, fix the small sample file immediately by editing, and re-simulate the entire file right away, so that I can drill deeper and collect more generic issues from my data sample. Only after a while, when I have harvested all the learnings that were in my sample file, would I apply the collected learnings to my actual data and create a new file - still not too large, in order to use my time efficiently.
    Best regards
    Michael  
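As an illustration of the pre-formatting Eelco is after, here is a small sketch of the Dutch postal-code rule from the question (in ABAP purely to show the transformation idea; the actual templates are XML):

* Sketch: normalise a Dutch postal code from '1234AB' to '1234 AB'.
DATA lv_pcode TYPE string VALUE '1234AB'.

REPLACE FIRST OCCURRENCE OF REGEX '^(\d{4})([A-Z]{2})$'
        IN lv_pcode WITH '$1 $2'.
* lv_pcode is now '1234 AB', the format the import expects.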

  • Offline data migration fails for BLOB field from MySQL 5.0 to 11g

I tried to use the standalone Data Migration tool several years ago to move a database from MySQL to Oracle. At that time it was unable to migrate BLOB fields. I am trying again, hoping this issue might have been fixed in the meantime. That does not appear to be the case.

The rows in question have a single BLOB field (it is a binary encoding of a serialized Java object, containing on the order of 1-2K bytes, a mixture of plain text and a small amount of non-ASCII data which is presumably part of the structure of the Java object). The mysqldump appears to correctly store the data, surrounded by the expected <EOFD> and <EORD> separators.

The data as imported consists of a small number (roughly 1-200) of ASCII characters, apparently hex encoded, because if I do a hex dump of the mysqldump I can recognize some of the character pairs that appear in the BLOB field after import. However, they are apparently flipped within the word or otherwise displaced from each other (although both source and destination machines are x86 family), and the imported record stops long before all the data is encoded.
    For example, here is a portion of the record as imported:
    ACED0005737200136A6
    and here is a hex dump of the input
    0000000 3633 3838 3037 3c39 4f45 4446 303e 3131
    0000020 3036 3830 3836 453c 464f 3e44 312d 453c
    0000040 464f 3e44 6e49 7473 7469 7475 6f69 446e
    0000060 7461 3c61 4f45 4446 ac3e 00ed 7305 0072
    0000100 6a13 7661 2e61 7475 6c69 482e 7361 7468
    0000120 6261 656c bb13 250f 4a21 b8e4 0003 4602
    0000140 0a00 6f6c 6461 6146 7463 726f 0049 7409
    0000160 7268 7365 6f68 646c 7078 403f 0000 0000
    AC ED appears in the 5th and 6th word of the 4th line, 00 05 in the 6th and 7th words, etc.
I see explicit references to using hex encoding for MS SQL and other source DBs, but not for MySQL.
I suspect the encoder is hitting some character within the binary data that aborts the encoding process, because so far the records I've looked at contain the same data (roughly 150 characters) for every record, and when I look at the binary input, it appears to be part of the Java object structure, which may repeat for every record.
    Here is the ctl code:
    load data
    infile 'user_data_ext.txt' "str '<EORD>'"
    into table userinfo.user_data_ext
    fields terminated by '<EOFD>'
trailing nullcols
(
    internal_id NULLIF internal_id = 'NULL',
    rt_number "DECODE(:rt_number, 'NULL', NULL, NULL, ' ', :rt_number)",
    member_number "DECODE(:member_number, 'NULL', NULL, NULL, ' ', :member_number)",
    object_type "DECODE(:object_type, 'NULL', NULL, NULL, ' ', :object_type)",
    object_data CHAR(2000000) NULLIF object_data = 'NULL'
    )

It looks like the data is actually being converted correctly. What threw me off was the fact that the mysql client displays the actual BLOB bytes, while sqlplus automatically converts them to hex for display, but only shows about two lines of the hex data. When I check the field lengths, they are correct.
