SAP Data Storage Migration from HP EVA SAN to NetApp FAS3070 FMC for M5000s

Good day all
We need to perform a storage migration for SAP data that currently resides on 2 HP EVA SANs. We have 2 Sun M5000s, 2 Sun E2900s and a couple of V490s, which all connect to the SAN via Cisco 9506 Directors. We have recently commissioned a NetApp Fabric MetroCluster on 2 FAS3070s, and need to move our SAP data from the EVAs to the new MetroCluster. Our Sun boxes are running Solaris 10. It was suggested that we use LVM to move the data, but I have no knowledge when it comes to Solaris.
I have some questions, which I hope someone can assist me in answering:
- Can we perform a live transfer of this data with low risk, using LVM? (Non-disruptive migration of 11 TB.)
- Is LVM a wise choice for this task? We have Replicator X too, but have had challenges using it on another MetroCluster.
- I would like to migrate our sandbox (1.5 TB) first as a test migration, to judge the speed of the data migration, then move all DEV and QA boxes across before the production data. There are multiple zones on the hardware mentioned above. Is there no simple way of cloning data from the HP to the NetApp, and then re-syncing before going live on the new system? (See the sketch below for one host-based approach.)
- Would it be best to create LUNs on the new SAN with the same sizes as the HP EVA LUNs, or is it equally simple to create best-practice-sized LUNs on the other side before copying the data across? Hard to believe it would be equally simple, but we would like to size the LUNs properly.
Please assist; I can provide further details if there are any questions in this regard.
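For illustration, a minimal sketch of the host-based mirror approach with Solaris Volume Manager (SVM), assuming the SAP volumes are already (or can be placed) under SVM control; the LUN size, device names, WWPN and metadevice numbers below are placeholders, not a real configuration:

    # On the NetApp (Data ONTAP 7-mode): carve a best-practice-sized LUN
    # and present it to the Solaris host
    lun create -s 500g -t solaris /vol/sap_sbx/lun0
    igroup create -f -t solaris sbx_host 10:00:00:00:c9:aa:bb:cc
    lun map /vol/sap_sbx/lun0 sbx_host

    # On the Solaris 10 host: d10 is an existing SVM mirror holding SAP data,
    # d11 is its submirror on the EVA, and c4t...d0s0 is the labelled slice
    # on the newly discovered NetApp LUN (same size or larger)
    metainit d12 1 1 c4t60A98000AABBCCd0s0   # new submirror on the NetApp LUN
    metattach d10 d12                        # attach; resync runs while SAP stays up
    metastat d10                             # wait until the new submirror shows "Okay"
    metadetach d10 d11                       # then drop the EVA submirror
    metaclear d11

The same attach/resync/detach cycle, done LUN by LUN, is what makes the migration non-disruptive; the caveat is that the data must already sit on SVM (or VxVM) metadevices rather than on raw EVA LUNs.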


Similar Messages

  • TS3297 "iTunes was unable to load data class information from Sync Services." What is the solution for the new phone to sync up with my iTunes acct?

    "iTunes was unable to load data class information from Sync Services." What is the solution for this new phone to sync up with my iTunes acct?

    Hi tadhunt,
    Here is an article that directly addresses that error message:
    iTunes for Windows: "Unable to load data class" or "Unable to load provider data" sync services alert
    http://support.apple.com/kb/ts2690
    I hope this helps!
    - Ari

  • Timeouts increased after we moved USR, SAP data files and TLogs to new SAN

    We are having issues with timeouts after we moved our USR, SAP SQL Datafiles and SAP Transaction Logs from our old SAN to a new SAN.
    Timeouts for SAPGUI users are set to 10 minutes.
    We are running Windows Server 2003 with SQL Server 2005.
    The SAP database has 8 datafiles with a total size of about 350GB.
    Procedure we used to move SAP to new SAN:
    1. Attached 3 new SAN Volumes
         -a. USR
         -b. Data Files
         -c. Transaction Logs
    2. Shut down SAP and SQL services
    3. Aligned the new volumes with a 1024 KB offset and gave the data file and transaction log volumes a 64 KB allocation
        unit size - see the sketch after this list. (The alignment and 64 KB allocation size were not set up for these volumes on the old SAN.)
    4. Copied the 3 volumes from old to new.
    5. Changed the new volumes drive letter to the drive letters of the old volumes.
         -a. I had to restart in order to change the USR volume.
         -b. Because of this I had to set up the sapmnt and saploc shares again.
    6. Started SQL services and then SAP services and everything came up just fine.
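    (For reference, the alignment and allocation-unit settings from step 3 were done along these lines; the disk number, drive letter and volume label are placeholders:)
         diskpart
         DISKPART> select disk 2
         DISKPART> create partition primary align=1024
         DISKPART> assign letter=F
         DISKPART> exit
         format F: /FS:NTFS /A:64K /V:SAPDATA /Q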
    The week before, we had anywhere from 1 to 9 timeouts per day.
    This week: Monday had 20 and Tuesday had 26.
    On Monday we saw that MD07 was the only transaction that was timing out, but Tuesday had others as well.
    The number of users in the system is about the same. The number of orders going in is about the same. No big transports went in right before we switched.
    Performance counters that I know about for disk look a lot better on the Data Files.
    - PAGEIOLATCH_SH ms/request is about 50% better
    - Under I/O Performance in DBACOCKPIT:
      - MS/OP is now anywhere from 5 to 30 - Old SAN: 50 to 300
    - The Hit Ratio is over 99% - same as the old SAN
    Looking at Wily Introscope graphs:
    - The "SAP Host: Average queue length" is about 30% to 40% lower than on the old SAN.
    - The "SAP Host: Disk utilization in %" is about the same.
    Questions:
    1. Did we do anything wrong or miss anything with our move procedure?
         a. Do we have to do anything in SQL since we changed volumes even though we kept the drive letters the same?
    2. What other logs or performance counters should I be looking at?
    Thank you,
    Neil

    Our new SAN Vendor is Compellent.  They have been fantastic.  I would highly recommend checking them out.
    The reasons for the timeouts had nothing to do with the SAN...Well kind of anyway.
    I decided to check t-code SM20 to see what users were doing when these timeouts were happening. What I found was that the program R_BAPI_NETWORK_MAINTAIN was being called thousands of times in a matter of 10 to 15 minutes, at random times throughout the day. It would take up about 50 to 80 percent of the programs being executed during these times.
    So I sent this information to our developers, and they found that R_BAPI_NETWORK_MAINTAIN was being called from another program that was looping thousands of times. The trigger to stop the loop wasn't happening fast enough. They made a change and we haven't seen the timeouts since.
    I think the performance increase allowed the loop to run faster, which caused the slowdowns and timeouts to happen more often.
    Thank you to everyone for their help!
    Neil

  • Migration Date when migrating from classic to new g/l

    Hello gurus,
    I am planning a migration from classic to New G/L, but it will not be possible to finish phase 0 (from the migration plan model) this fiscal year. Is it possible to do it in the middle of the new fiscal year, and what are the drawbacks or issues? What do you recommend? What strategy can I use?
    Thanks,
    Silvia Guillen

    Hi Silvia,
    There is a difference between the migration date and the activation date. The migration date will always be the first day of the new fiscal year.
    But as of the migration date you have to have your configuration changes in PRD.
    For details please check the following link -
    https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=812919
    https://websmp230.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1070629
    Regards,

  • Access SAP Data Archival file from outside SAP

    Hello Everyone,
    I have a requirement to archive SAP data, dump it outside SAP into some other system like ILM or BI, and build a reporting tool on top of that data.
    So basically the customer wants to shut down SAP but retain the data for legal and audit purposes.
    I was doing some R&D and archived MM_EKKO using SARA. The file was generated with the extension .ARCHIVE. I downloaded the file, but it is encoded, with special characters throughout.
    My question is:
    1. How can I read the archived SAP data from outside the SAP system?
    2. Can we decode the .ARCHIVE file to get it into .DAT format?
    3. Or is there any other way to access the SAP data outside SAP in a report format?
    Thanks,
    Chintan Soni.

    Hi Chintan,
    1. How can I read the archived SAP data from outside the SAP system?
    For this you could refer to SAP Note 460620 - Migrating archive files.
    2. Can we decode the .ARCHIVE file to get it into .DAT format?
    As far as I know, it's not possible to decode it or move it to .DAT format.
    3. Or is there any other way to access the SAP data outside SAP in a report format?
    Refer to my first response and the SAP Note.
    Hope this will help you.
    Good luck !!
    Gaurav

  • RDM migration from Dell Compellent SAN

    Hello,
    I have to do a file server migration from Server 2003 to Server 2008 R2.
    The problem is that a few RDMs are presented to the VM (physical mode) from a Dell Compellent SAN (6.3.5.7), so I am wondering how to get these RDMs migrated to the new server (2008 R2) without any issues.
    I have heard that you can use storage Replays to re-create the same RDM volumes; if that's possible, can someone tell me how I can get the latest changes from the live server to the test VM, which is on the DR site?
    We are doing asynchronous replication of everything from the live Storage Center to the DR Storage Center.
    any help would be appreciated.
    thanks
    Joh

    These steps are based off of the vSphere client, not the web client (don't use that often enough to know the exact steps off my head, though they'll be similar).
    - Shut down the original VM (2003)
    - Right click the VM and go to edit settings
    - identify the disk(s) that are the RDMs (probably not "hard disk 1")
    - select the disk and click remove towards the top
    - confirm by clicking OK
    - edit settings on the new VM (2008 R2)
    - click ADD along the top
    - select to add a hard disk
    - select the Raw Device Map option
    - select the correct Compellent disk
    - confirm with OK
    - if the 2008 R2 VM isn't already (still) powered up, power it up now
    - if the 2008 R2 VM is already/still running, go to disk management, select action and rescan
    - assign/change the drive letter on the new disk to meet your needs
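    If you'd rather script it than click through, a rough PowerCLI equivalent of the steps above (the vCenter and VM names are placeholders; it assumes a single RDM, so loop over the result if there are several, and test against a non-critical RDM first):
         # Re-point a physical-mode RDM from the 2003 VM to the 2008 R2 VM
         Connect-VIServer vcenter.example.local
         $old = Get-VM "FS-2003"
         $new = Get-VM "FS-2008R2"
         Shutdown-VMGuest -VM $old -Confirm:$false          # wait for power-off before continuing
         $rdm = Get-HardDisk -VM $old | Where-Object { $_.DiskType -eq "RawPhysical" }
         $lun = $rdm.ScsiCanonicalName                      # e.g. naa.6000d31000...
         Remove-HardDisk -HardDisk $rdm -Confirm:$false     # removes the mapping, not the LUN contents
         New-HardDisk -VM $new -DiskType RawPhysical -DeviceName "/vmfs/devices/disks/$lun"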

  • Migrating from 3.1 to 3.7 - write through for a custom cache store issues

    We're migrating from 3.1 to 3.7. So far the migration and testing have been fairly uneventful, but one issue came up yesterday that seems like it is going to be tricky to debug.
    We have a set of storage-enabled nodes that use a custom CacheStore to read from and write behind to a Mongo database. On another node connected to that caching service, read-throughs work just fine (I can set breakpoints on the CacheStore load method and see the load calls coming through), but what's not working is when the other node does a Cache.put - the store method on the CacheStore is never called, and so far I don't see anything in the logs indicating a problem on either side. (I'm going to make sure that the Coherence logging is at the highest level on both nodes today when I'm doing more testing.)
    I can see the cache put start to dive into the Coherence jar, but I don't have source jars for Coherence, so it's fairly opaque what might be going wrong after the Cache.put(object, object) call.
    Any ideas on where to start debugging this?
    This setup worked fine on 3.1, and as best we can tell all the API calls were converted over to their proper Coherence 3.7 versions, and the coherence.xml files were migrated to use the new XSD, etc.

    it seems that the issue might be related to this:
    2012-08-15 14:19:34.086 Tangosol Coherence 3.7.1.5 <Error> (thread=WriteBehindThread:CacheStoreWrapper(com.foo.cache.MongoCacheStore):Foo.com-CMS, member=13): Failed to store key="assetId=DEFAULT;assetStyle=DEFAULT;initial=c;siteId=foosite;"
    2012-08-15 14:19:34.087 Tangosol Coherence 3.7.1.5 <Error> (thread=WriteBehindThread:CacheStoreWrapper(com.foo.configrepo.cache.MongoCacheStore):Foo.com-CMS, member=13): (Wrapped) java.io.StreamCorruptedException: invalid type: 13
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:266)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ConverterFromBinary.convert(PartitionedCache.CDB:4)
         at com.tangosol.net.cache.BackingMapBinaryEntry.getValue(BackingMapBinaryEntry.java:124)
         at com.tangosol.net.cache.ReadWriteBackingMap$CacheStoreWrapper.storeInternal(ReadWriteBackingMap.java:5731)
         at com.tangosol.net.cache.ReadWriteBackingMap$StoreWrapper.store(ReadWriteBackingMap.java:4814)
         at com.tangosol.net.cache.ReadWriteBackingMap$WriteThread.run(ReadWriteBackingMap.java:4217)
         at com.tangosol.util.Daemon$DaemonWorker.run(Daemon.java:803)
         at java.lang.Thread.run(Thread.java:662)
    Caused by: java.io.StreamCorruptedException: invalid type: 13
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2303)
         at com.tangosol.util.ExternalizableHelper.deserializeInternal(ExternalizableHelper.java:2746)
         at com.tangosol.util.ExternalizableHelper.fromBinary(ExternalizableHelper.java:262)
         ... 7 more
    Looks like it is an issue with the serialization? We're primarily using XmlBean, not POF, for serialization.
    Any tips on troubleshooting this?
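    One way to narrow this down is to round-trip one of your cached values through the same helper that appears in the stack trace; if this throws the same StreamCorruptedException in a plain JVM, the bean itself (or a serializer mismatch between nodes) is the problem. This is a simplification, since the cluster may be using a configured serializer rather than the default, and MyXmlBean is a stand-in for one of your real value classes:
         import com.tangosol.util.Binary;
         import com.tangosol.util.ExternalizableHelper;

         public class RoundTripCheck {
             public static void main(String[] args) {
                 Object value = new MyXmlBean();                     // stand-in: use one of your cached value types
                 Binary bin  = ExternalizableHelper.toBinary(value); // serialize the way the backing map does
                 Object back = ExternalizableHelper.fromBinary(bin); // the call that fails in the stack trace
                 System.out.println("round trip OK: " + back);
             }
         }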

  • Migrating from Windows Server 2003 to Windows Server 2012 for web server deployment, with the development machine on Windows Server 2008

    Hi Microsoft Team,
    We need your urgent advice, and this is on priority:
    Issue description: We need to migrate from Windows Server 2003 to Windows Server 2012, while the development activity will be carried out on Windows Server 2008 as the development box.
    .NET Framework version: 3.5 (for both development (Windows Server 2008) and web server (Windows Server 2012))
    IIS version: 7.5 (for both development (Windows Server 2008) and web server (Windows Server 2012))
    We need your quick advice: is this configuration feasible for development and deployment (web server)?
    We would highly appreciate your response, as which product we need to buy depends on this. If you see any showstopper or concern, please let us know.

    Hi,
    As suggested by Tim, in order to get better assistance, we can ask for help in the following IIS forum.
    IIS Forum
    http://forums.iis.net/
    In addition, regarding migrating from Windows Server 2003 to Windows Server 2012, the following blog can be referred to for more information.
    Step-By-Step: Active Directory Migration from Windows Server 2003 to Windows Server 2012
    http://blogs.technet.com/b/canitpro/archive/2013/05/27/step-by-step-active-directory-migration-from-windows-server-2003-to-windows-server-2012.aspx
    Best regards,
    Frank Shen

  • How to UNLOCK queues when data is migrated from a legacy system to a SAP system

    Hi  All,
    I need some help regarding queues (SMQ2). XI is being used to migrate data from a legacy system to a SAP system. Sometimes a queue gets locked; once the queue is unlocked, the message is processed correctly. So the incoming queues must be "unlocked" routinely so that they can process through the system.
    There is a standard report, RSQIWKEX, that can be scheduled in the SAP system to automatically unlock the queues. Before using it, we made sure we added the parameter MONITOR QRFC_RESTART_ALLOWED set to "1" in the Integration Engine specific configuration (transaction SXMB_ADM). We are unable to analyze the reason for the locking of the queues and how to avoid it. If the locking of the queues cannot be avoided, is there any way to unlock them? We tried executing this report but were not successful in unlocking the queues. Please guide us in solving this issue; it would be really helpful if you can give us a direction on this problem.
    Thanks in Advance,
    Shwetha.

    Hi,
    Just check if the queues are registered in transaction SMQR.
    If they're not, register the queue by pressing the 'Register' button.
    Sharif.

  • No data after migration from MSSQL 2008 to Oracle 11g

    Hi
    After a long road migrating a database from MSSQL 2008 to Oracle 11g (lots of topics on the Oracle forum were very helpful), I finally wanted to migrate my data. So I followed this http://st-curriculum.oracle.com/obe/db/hol08/sqldev_migration/mssqlserver/migrate_microsoft_sqlserver_otn.htm tutorial, used oracle_ctl, and got back this output:
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Jun 5 22:37:29 2012
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> SQL>
    Table altered.
    SQL>
    Table altered.
    SQL>
    Table altered.
    SQL>
    Table altered.
    SQL>
    Table altered.
    SQL>
    Table altered.
    SQL>
    Trigger altered.
    SQL>
    Table altered.
    SQL>
    Table altered.
    SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0
    - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL*Loader: Release 11.2.0.1.0 - Production on Tue Jun 5 22:37:29 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    SQL*Loader-522: lfiopn failed for file (log\dbo_zsbddwa.Pracownicy.log)
    SQL*Loader: Release 11.2.0.1.0 - Production on Tue Jun 5 22:37:30 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    SQL*Loader-522: lfiopn failed for file (log\dbo_zsbddwa.Dokumenty.log)
    SQL*Loader: Release 11.2.0.1.0 - Production on Tue Jun 5 22:37:30 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    SQL*Loader-522: lfiopn failed for file (log\dbo_zsbddwa.sysdiagrams.log)
    SQL*Loader: Release 11.2.0.1.0 - Production on Tue Jun 5 22:37:30 2012
    Copyright (c) 1982, 2009, Oracle and/or its affiliates. All rights reserved.
    SQL*Loader-522: lfiopn failed for file (log\dbo_zsbddwa.Spis.log)
    SQL*Plus: Release 11.2.0.1.0 Production on Tue Jun 5 22:37:30 2012
    Copyright (c) 1982, 2010, Oracle. All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> 2
    PL/SQL procedure successfully completed.
    SQL> 2
    PL/SQL procedure successfully completed.
    SQL> SQL>
    Table altered.
    SQL>
    Table altered.
    SQL>
    Table altered.
    SQL>
    Table altered.
    SQL>
    Table altered.
    SQL>
    Table altered.
    SQL>
    Trigger altered.
    SQL> Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0
    - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options.
    OK, I see there are problems with saving the logs, but moving on: in SQL Developer I open a table in the migrated database, press "Data", and unfortunately there is no data. I tried a few times and checked the permissions for the database user, but still the same.
    Anyone have an idea what could be wrong?
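    For what it's worth, SQL*Loader-522 ("lfiopn failed") means the .log file itself could not be opened, and SQL*Loader then exits without loading anything, which would explain the empty tables. Assuming the generated scripts write into a log\ subdirectory that doesn't exist yet, a likely fix (username/password are placeholders) is:
         mkdir log
         sqlldr userid=migrated_user/password control=dbo_zsbddwa.Pracownicy.ctl log=log\dbo_zsbddwa.Pracownicy.log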

  • Spotlight is continuously indexing the data I migrated from an old Mac. At first it said 450 hours and then it vanished. Now it says it has been indexing volumes for the last 10 hours

    I bought a new MacBook Pro (Intel Core i7) and migrated my data, about 260 GB, from my previous MacBook Pro. Since then Spotlight has been indexing volumes, in between showing a progress bar indicating 450 hours remaining. It vanished after some time and "Indexing volumes" showed up again. Obviously I cannot keep the Mac on forever; will it begin the indexing all over again?

    You should really read the manual.
    "How do you restore from backup? "
    Restore.  When given the choice, choose to use backup.
    "And how can I check to see if the pics and videos are on my computer somewhere first??"
    They would only be where you put them. What program did you use to import them? Pics/vids taken with the iPod are not part of the sync process at all. You should be importing them just as you would with any digital camera.
    If you did not import them, then they are not on your computer.

  • Time Machine migrates to the wrong date. How do I specify which date to migrate from?

    Recently my MacBook Pro (2011 unibody, 10.7.5) crashed due to bad sectors. After installing a new hard drive, I performed a restore via Migration Assistant. It all went well except that it did not restore to the latest date.
    The cause of this was that I had a few backups in Time Machine that are dated far into the future, such as 2015, 2016, etc. They were made when I switched the system time to the future in order to disable some applications that were giving me problems. Right when I did this, Time Machine made a backup, and now those backups are always considered the "latest" backups.
    So my question is, how do I perform a FULL restore using migration to a specified date, instead of what it thinks is the "latest" backup?
    Any help would be greatly appreciated.

    I at first thought it might be a Mac thing, but I couldn't find a solution. I realized that the date only affected my email, so then I thought it must be a Mozilla problem.

  • Unsupported Data Type (migration from Access 2000)

    Hi,
    I'm trying to convert an MS Access 2000 database which contains a table with 2 'Binary' data type columns.
    The OMWB does not seem to know this data type, and the GUI does not allow me to add it. Is there any other way to add this data type?
    I should somehow get it to convert to Oracle RAW.
    Generating the Oracle Model with the 'Binary' type in the XML file generates errors.
    Thanks in advance,
    esther

    Log a ticket with support at otn.oracle.com/migration and give details of this. Include a test case to ascertain whether it is a bug or not.
    B
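    If the OMWB GUI won't accept the type, one manual workaround while the ticket is open is to edit the generated DDL and map the Binary columns to RAW by hand before loading; a minimal sketch (table and column names are made up):
         -- Access 'Binary' columns mapped by hand to Oracle RAW (up to 2000 bytes)
         CREATE TABLE documents (
             id       NUMBER PRIMARY KEY,
             checksum RAW(16),
             payload  RAW(2000)
         );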

  • HT201386 Does it use data to migrate from iPhoto?

    Does migrating iPhoto to Photos use data, or is it all done in the cloud?

    You must get the movie onto your iPad somehow. If you download it while on Wi-Fi you will not use cellular data. Once the movie has been downloaded you will not use cellular data or Wi-Fi.

  • Migrating from BOXI R2 to R3 - Need URL information for CreateNew/Modify.

    Hi,
    Ours is a web application deployed on WebSphere, and we connect to BusinessObjects from the application for reporting.
    We are using BO XI R2 and are in the process of migrating to BO XI R3.
    Within our application we have the functionality to create and edit ad hoc reports on BusinessObjects.
    For this in R2 we are using the following urls.
    To create a new ad hoc report --
    http://machinename:portnumber/businessobjects/enterprise115/desktoplaunch/InfoView/CrystalEnterprise_Webi/new.do
    To modify an ad hoc report -- http://machinename:portnumber/businessobjects/enterprise115/desktoplaunch/InfoView/CrystalEnterprise_Webi/modify.do
    These URLs do not work in R3.
    We are looking for the corresponding URLs to be used in R3 for the above functionality to work.
    Any help will be appreciated.
    Thank You.

    You will need to figure those out on your own (or if you are lucky someone else on the forum has already done it and will post).
    Essentially, we do not have those URLs documented, and to access those pages you should be going through InfoView itself, not directly.
    Good luck,
    Jason
