Minimizing Downtime during Upgrade

Hi All,
We are in the process of upgrading from R/3 4.7 EE 2.00 to ECC 6.0 SR3.
The details of the legacy and new landscapes are as follows:
Legacy Landscape
OS       Solaris 9
DB       Oracle 9.2.0.4
SAP     R/3 4.7 EE 2.00
New Landscape
OS       Solaris 10
DB       Oracle 10g
SAP     ECC 6.0 SR3
We have taken an offline backup of the live PRD system and restored it onto a standby server. We have also upgraded Oracle 9i to 10g, which took around 20 hours.
My question: we want to minimize the downtime by installing Oracle 10g directly with R/3 4.7 EE, then taking an offline/online backup of the live PRD system and restoring it into the Oracle 10g environment. By doing this we can reduce the downtime by approximately 15 hours.
Is this scenario possible? Please share your suggestions from your past experiences.
Thanks,
Kshitiz Goyal

> My question: we want to minimize the downtime by installing Oracle 10g directly with R/3 4.7 EE, then taking an offline/online backup of the live PRD system and restoring it into the Oracle 10g environment. By doing this we can reduce the downtime by approximately 15 hours.
Yes - this is possible.
I suggest you get the newest installation CD set for 4.7, which supports Oracle 10g (using sapinst, not R3SETUP):
See note "969519 - 6.20/6.40 Patch Collection Installation : Oracle/UNIX", Section 3:
3.) Retroactive Release of Oracle 10.2
Markus
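
For illustration, a minimal sketch of the backup/restore copy with BR*Tools (a hedged sketch, not a definitive procedure: connect options and the backup profile come from your own init<SID>.sap, and "last" simply refers to the most recent backup recorded in the logs):

# On the source system (R/3 4.7 on Oracle 9i): take a consistent offline backup.
brbackup -u / -t offline -m all

# On the freshly installed 4.7 target running Oracle 10g: restore that backup
# and run a complete recovery.
brrestore -u / -b last -m full
brrecover -u / -t complete

Note that 9i datafiles restored under 10g binaries still need the data dictionary upgrade (STARTUP UPGRADE, then catupgrd.sql) before SAP can be started; what you save is mainly the separate in-place upgrade on the production host, so time the whole sequence on the standby server first.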

Similar Messages

  • Guidelines to reduce downtime during upgrade from R/3 to ECC 6

    Hi all,
    We implemented SAP R/3 4.6 in the year 2005. Now we are planning to upgrade our system to ECC 6 Unicode.
    We are planning to upgrade our Production and Development systems to SAP ERP 6.0 EHP4. As part of the upgrade, we set up a test system with the following details:
    Sun Fire 480R (UltraSPARC III+ 1.05 GHz x 4 processors)
    Memory: 16 GB; Storage: Sun CSM 200 SATA HDD
    SAP R/3 4.71, single code page 1100, Oracle 9.2.0.8.0, Solaris 9
    Test system with 1.2 TB of data copied from the Production system.
    We upgraded the test system using the CU&UC method (upgrade guide: SAP ERP 6.0 EHP4 SR1 ABAP). It took 2 days of downtime for the full upgrade and 8 days for the Unicode conversion.
    Requirement:
    Upgrade the current systems, without any additional hardware, to SAP ERP 6.0 EHP4 Unicode.
    As part of this, the cluster has to be upgraded to Sun Cluster 3.2 and the operating system to Solaris 10 (5.10).
    We have the following queries:
    1. With only 1.3 TB of data, we feel the downtime of 10 days is not realistic. Although we did not deviate from the above-mentioned upgrade guide, what could be the reasons for this 10-day downtime? What are the affecting factors? Kindly suggest ways to reduce the downtime. We have referred to Notes 784118, 855772, 952514, 936441, 954268 and 864861, but are still unable to pinpoint the reason. A breakdown of this downtime is given below for reference.
    2. Are there any alternative approved upgrade methods to reduce downtime in our present setup? (Details of the present setup are given below.)
    3. How do we calculate the downtime for our setup with reference to SAP Note 857081?
    Current Production System Details:
    System: Sun Fire V890
    Processors: 8 x 1.5 GHz UltraSPARC IV+
    Memory: 64 GB
    Storage: Sun StorageTek 6180, 16 x 450 GB 15,000-rpm FC-AL HDDs, 4 Gb/sec, 4 GB data cache (storage connected to the server via FC cable)
    SAP: R/3 4.71, single code page 1100
    Database: Oracle 9.2.0.8, data size around 1.3 TB
    OS: Solaris 9 (5.9)
    Cluster: Sun Cluster 3.1
    Network: 1 Gb Ethernet (1000BASE-SX)
    Architecture: application and database server on the same system
    The SAP Development system has the same specification as above, with 187 GB of data. The PRD and DEV systems are clustered; PRD fails over to the DEV system.
    The time taken for the test upgrade, in brief:
    1. Kernel upgrade 6.20 to 6.40 (2 hours)
    2. Oracle upgrade to 10.2.0.4.0 (3.5 hours)
    3. SAP upgrade preparation, applying support packages (18 hours)
    4. Downtime phase, pre-processing and finalization phases (20 hours)
    5. Unicode conversion, taking the DB export of 109 GB, almost 10% of the total DB size (98 hours)
    6. Installation of the Unicode system using the export (95 hours)
    Looking forward to hearing from you.
    Best Regards,
    Rajeesh K.P

    Hi,
    I am very sorry for the late reply.
    The time taken for the test upgrade, in brief:
    1. Kernel upgrade 6.20 to 6.40 (2 hours)
    2. Oracle upgrade to 10.2.0.4.0 (3.5 hours)
    3. SAP upgrade preparation, applying support packages (18 hours)
    4. Downtime phase, pre-processing and finalization phases (20 hours)
    5. Unicode conversion, taking the DB export of 109 GB, almost 10% of the total DB size (98 hours)
    6. Installation of the Unicode system using the export (95 hours)
    The 2 days of downtime is the sum of the time used for the first 4 activities.
    As you suggested, we can split these into 4 separate downtimes.
    The Unicode conversion DB export takes a continuous downtime of 98 hours, about 4 days (activity 5), and the installation of the Unicode system using that export takes 95 hours, about another 4 days (activity 6).
    I think we have to do activities 5 and 6 in a single stretch without users, so together they will take a continuous downtime of about 8 days.
    With this in mind, I have a few questions for you to clarify the problem a little more.
    Q1: Did you use more parallel jobs during downtime than the default? I mean real downtime, where the SAP system is taken down.
    No, we used the default number (3) of parallel jobs during downtime.
    Q2: Did the system allow you to use ICNV (incremental conversion) for some tables?
    Yes, we may use ICNV.
    Q3: Can you separate the Unicode conversion to a different weekend, or is a single stretch a must for some business reason?
    Yes, we can separate the Unicode conversion to a different weekend; it will not affect our business.
    Q4: Did you save the post-download report so you can see which phases are the most time-consuming?
    I am not clear about this question; we made an activity chart with durations from the test upgrade. How can we save the post-download report from the system?
    Looking forward to hearing from you.
    Best Regards,
    Rajeesh K.P
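
    For what it's worth, the long activities 5 and 6 are R3load export/import runs, and their parallelism is controlled by the Migration Monitor's property files rather than by the upgrade dialogs. A hedged excerpt (the paths are placeholders, and the key names should be verified against the Migration Monitor guide for your release):
    # export_monitor_cmd.properties -- excerpt
    exportDirs=/export/ABAP/DATA
    installDir=/migmon/export
    ddlFile=/export/ABAP/DB/DDLORA.TPL
    # number of parallel R3load export jobs; raise as far as CPU and I/O allow
    jobNum=8
    The matching jobNum in import_monitor_cmd.properties drives the import side; together these are usually the biggest lever on the 98-hour and 95-hour figures.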

  • Major version upgrade of WebLogic with zero/minimal downtime

    From what I can tell, the recommended approach for supporting minimal downtime during major version upgrades (e.g. WL 9 -> WL 10) is to have 2 domains available in the production environment:
    leave one running to support existing users, upgrade the other domain, then swap and perform the upgrade on the first domain.
    We are planning on starting out with WL 9.1, but moving forward we require very high availability (99.99%).
    Is this my only option?
    According to BEA marketing literature, service pack upgrades can be applied with "zero" downtime... but if this isn't reality, I'd like to hear more.
    Thanks...
    Chuck

    I have gotten as far as upgrading all of the software, deleting /var/db/.AppleSetupDone, and rebooting. That brought me back into Setup Assistant and let me choose "migrate from another mac os x server", and it is now sitting waiting for me to take the old server down and boot it into target disk mode. Which we can probably do Sunday at about 2 am or so...
    You know, Setup Assistant should really let you run Software Update BEFORE migrating from another machine. We have servers that can't be down for software updates in the middle of the day...

  • XPRA to minimize downtime during upgrade

    Hello Experts,
    I am looking for information about XPRAs and system downtime during an upgrade. Is it possible to modify the XPRAs that have to be run during the upgrade in order to reduce the downtime, and if so, how? Please also give me other suggestions for reducing XPRA-related downtime.
    Thanks...
    Viral

    Hi Viral,
    When you run PREPARE, it will tell you how many XPRA objects there are for conversion. You can work on the XPRA objects during the uptime period.
    Regarding parallel processes: when you run the upgrade, it will ask you for the number of background processes and the number of parallel processes. You can choose a bigger value to reduce the downtime, provided you have sufficient hardware.
    Regards
    Ashok Dalai

  • How to increase the number of R3trans processes during upgrade

    Dear all,
    I'm doing an SAP upgrade to ECC 6 + EHP4, and I selected the scenario "High resource use (minimal downtime, fast import, archiving off)" in the Configuration module.
    The Downtime module has started, but I see that the GUI does not allow modifying the number of processes it is going to use, such as:
    > MAXIMUM UPTIME PROCESSES
    > R3TRANS PROCESSES
    I see the XPRA_UPG phase is using only 1 batch process, even though plenty of resources are available.
    As I'm planning further upgrade runs for this system in the near future, I would like to know whether there is a way to increase the number of these processes without changing the scenario. In this run I suspect MAXIMUM UPTIME PROCESSES is set to 1; next time I would like to set it to at least 2, while continuing to use the "High resource use (minimal downtime, fast import, archiving off)" scenario but with higher values for MAXIMUM UPTIME PROCESSES and R3TRANS PROCESSES.
    I read the 'Troubleshooting and Administration Guide', but this is not described there; it seems the only way to change these parameters is to choose a completely different scenario (the so-called "Manual selection").
    How can they be changed? Are they written to some file in the upgrade directory?
    best regards

    Hello Roberto,
    with this option ("High resource use (minimal downtime, fast import, archiving off)") it is not possible to change the key parameters that you're looking for.
    For your case you should select the option "Manual selection of parameters". Please check the piece of log below to see the parameters you can change with this option:
    >> 2009/05/27 15:12:57  START OF PHASE PREP_EXTENSION/INITSUBST
    >>>> Choose configuration <<<<
    Select configuration
    01)  -  Standard resource use (archiving off)
    02)  -  High resource use (archiving off)
    03)  -  High resource use (archiving on)
    04)  *  Manual selection of parameters
    : Manual selection of parameters
    >>>> Archive Mode <<<<
    Choose an upgrade phase for disabling the archive mode. For more information,
    see the upgrade guide.
    If the archive mode is disabled, all production operation has to be stopped.
    01)  -  No disabling of the archive mode (Archiving on)
    02)  *  The archive mode should be disabled in phase STOPSAP_TRANS
    Choose the archive mode:: The archive mode should be disabled in phase STOPSAP_TRANS
    >>>> SGEN Execution Mode <<<<
    Choose an execution strategy for SGEN. For more information, see the upgrade
    guide.
    01)  -  Do not start SGEN during the upgrade.
    02)  *  Fill table GENSETC with relevant loads, but do not run SGEN.
    03)  -  Fill table GENSETC and run SGEN with low resource consumption.
    04)  -  Fill table GENSETC and run SGEN with high resource consumption.
    Choose the SGEN execution mode:: Fill table GENSETC with relevant loads, but do not run SGEN.
    >>>> Batch Configuration and Upgrade Processes <<<<
    You need to supply information about the batch server and the number of
    processes used.
    Enter the host name of your batch server:
    BATCH HOST: SAP_EXAMPLE
    Enter the maximum number of batch processes during the upgrade:
    BATCH PROCESSES: 5
    Enter the maximum number of parallel processes during uptime:
    MAXIMUM UPTIME PROCESSES: 1
    Enter the number of parallel import processes during downtime:
    R3TRANS PROCESSES: 3
    As you can see, all these parameters are editable with this option. You should consider it in your future upgrades, in my opinion.
    Best regards,
    Tomas Black
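
    If you want to see where SAPup recorded these choices between runs, a hedged check (assuming the default upgrade directory /usr/sap/put; the control file names vary by SAPup version):
    grep -ri "R3TRANS" /usr/sap/put/bin | head
    grep -ri "UPTIME"  /usr/sap/put/bin | head
    Editing these control files by hand is not supported, though; re-answering the INITSUBST phase with "Manual selection of parameters", as Tomas describes, is the clean way to change them.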

  • Downtime for upgrade from SP14 to SP23 - PI 7.0

    Hello All,
    What is the estimated downtime for upgrading PI 7.0 from SP14 to SP23? We only have one application server in the environment, and the decision to upgrade will be made based on the downtime information. Appreciate any response. Thanks, Rahul.

    Hi Rahul,
    We recently upgraded our PI 7.0 system to SP 21.
    The Basis team requested 19 hours of downtime, but the actual downtime required was less than 10 hours.
    For the upgrade there is a downtime-minimized approach that lets you perform some activities online, without bringing down the system.
    To follow this approach, go to transaction SAINT --> Extras --> Settings --> Import Queue --> Import Mode.
    There is an option called "Downtime-minimized"; check it to reduce the downtime during the upgrade.
    Regards,
    Subbu

  • Need to convince manager on posting downtime during build of setup tables

    Can anyone suggest some pointers for writing a business case for downtime during the build-up of setup tables?
    This is for the Sales application component.

    Delta loads, especially for LO extraction, are governed by the V3 jobs scheduled in the source system.
    These V3 jobs will post into the delta queue even while there is activity in the R/3 system.
    To ensure that no transactions get missed, downtime is taken in the source system so that no documents are changed.
    If you want to avoid downtime, you need to know which documents changed during the period the extractors were refreshed, so that you can do a full repair request for them.
    The typical procedure involves:
    1. Take downtime.
    2. Clear delta queues into SAP BI
    3. Deschedule V3 jobs
    4. Move enhancements into the production R/3 system.
    5. Do an init without data transfer into SAP BI to create the delta queues
    6. Schedule the V3 collection jobs
    7. Open the system for users to post/change documents, etc.
    8. Continue deltas into SAP BI
    If you want to avoid downtime:
    1. Clear delta queues into SAP BI
    2. Deschedule V3 jobs
    3. Move enhancements into R/3 production
    4. Init without data transfer
    5. Schedule V3 jobs
    6. Continue deltas
    7. Perform full repair request for documents that have changed between steps 1 and 5 into SAP BI.
    I have given the steps for an enhancement, but any activity involving changes to V3 extractions, or upgrade activity in SAP BI, falls into the same category.

  • Minimizing outages during deployment

    I've got a largish (for us) WLI application with some 50-odd EJBs and 15 webapps. (The Integration processes get compiled into EJBs and webapps.) The entire EAR file is around 15 MB, and because of the size of this behemoth, we're looking at splitting it up into smaller pieces.
    However, whether or not this thing gets split, we have an issue with downtime during deployment. The app is supporting our manufacturing processes, and our uptime requirements are as close to 24x7 as possible. With enhancement requests and bug fixes we'll likely be updating the app every two weeks. The entire application is message-driven: JMS using MQ as a provider, with MDBs triggering the JPDs. In most cases the end users (VMS clients written in Fortran/C) send a message and wait for a response. Clients are running manufacturing processes and cannot be delayed without financial impact.
    What is the best practice for deployment in a production environment without disrupting the end users? JMS will handle buffering, etc., but how do we minimize the length of the outage during an upgrade? Documentation around deployment best practices is scarce at best.
    thanks
    mike
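
    One pattern worth testing for the application-update case is WebLogic's side-by-side "production redeployment" (WL 9.x and later), where a new application version is deployed while the old one finishes its in-flight work. A hedged sketch, with hypothetical host, credentials, names and version tags (the EAR must carry a Weblogic-Application-Version manifest entry):
    java weblogic.Deployer -adminurl t3://adminhost:7001 \
         -username weblogic -password welcome1 \
         -redeploy -name wliApp -source /builds/wliApp-v2.ear \
         -appversion v2 -retiretimeout 300
    This covers application updates rather than server upgrades; for a WL 9 -> WL 10 move, the two-domain swap described in the previous thread remains the usual approach.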

    Hi,
    Regarding the current situation, I suggest turning on the client logging functionality. Run the following command on the WDS server:
    WDSUTIL /Set-Server /WDSClientLogging /Enabled:Yes
    Then you can find the setup logs on the client computer by referring to the following article, and see which driver failed to install:
    How to enable logging in Windows Deployment Services (WDS) in Windows Server 2003, Windows Server 2008, Windows Server 2008 R2, and in Windows Server 2012
    http://support.microsoft.com/kb/936625/en-us
    Then we can try to integrate this driver into the captured image and try to deploy again.
    Keep us posted.
    Kate Li
    TechNet Community Support

  • Migrating Hyper-V 2008 R2 HA Clustered to Hyper-V 2012 R2 HA Clustered with minimal downtime to new hardware and storage

    Folks:
    Alright, let's hear it.
    I am tasked with migrating an existing Hyper-V HA clustered environment from 2008 R2 to new server hardware and storage running 2012 R2.
    Web research is not panning out; it seems we are looking at a lot of downtime. I am a VMware guy, and there I would likely do a V2V migration at this point with minimal downtime.
    What are my options in the Hyper-V world? Help a brother out.

    Merging does require some extra disk space, but not much.
    In most cases the data in the differencing disk is changes, not additional files.
    The absolute worst case is that the amount of disk space necessary would be the total of the root plus the snapshot.
    Quite honestly, I have seen merges succeed with folks being down to 10 GB free.
    But low free disk space will cause the merge to take longer, so you always want to free up storage space to speed up the process.
    Merge is designed not to lose data, and that is really what takes the time in the background: ensuring that a partial merge will still allow a machine to run, and that a full merge has everything.
    Folks have problems when their free space hits that critical level of 10 GB, or if they have some type of disk failure during the process.
    It is always best to let the merge process happen and do its work. You can't push it, and you cannot stop it once it starts (you can only cause it to pause). That said, you can break it by trying to second-guess or manipulate it.
    Brian Ehlert
    http://ITProctology.blogspot.com
    Learn. Apply. Repeat.

  • Schema changes, minimal downtime

    We are a software development company using Oracle 10g (10.2.0.2.0). We need to implement schema changes to our application in a high-traffic environment, with minimal downtime. The schema changes will probably mean that we have to migrate data from the old schema to new or modified tables.
    Does anyone have any experience with this, or a pointer to a 'best practices' document?

    It really depends on what "minimal" entails and how much you're willing to invest in terms of development time, testing, hardware, and complexity in order to meet that downtime requirement.
    At the high end, you could create a second database either as a clone of the current system that you would then run your migration scripts against or as an empty database using the new schema layout, then use Streams, Change Data Capture, or one of Oracle's ETL tools like Warehouse Builder (which is using those technologies under the covers) to migrate changes from the current production system to the new system. Once the new system is basically running in sync with the old system (or within a couple of seconds), you can shut down the old system and switch over to the new system. If the application front end can move seamlessly to the new system, and you can script everything else, you can probably get downtime to the 5-10 second range, less if both versions of the application can run simultaneously (i.e. a farm of middle-tier application servers that can be upgraded 1 by 1 to use the new system).
    Of course, at this high end, you're talking about highly non-trivial investments of time/ money/ testing and a significant increase in complexity. If your definition of 'minimal' gets broader, the solutions get a lot easier to manage.
    Justin
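
    At a lower cost than full Streams replication, individual tables can often be restructured online with the DBMS_REDEFINITION package that ships with 10g. A hedged sketch, with hypothetical schema and table names (the interim table APP.ORDERS_NEW must already exist with the target layout):
    sqlplus / as sysdba <<'EOF'
    EXEC DBMS_REDEFINITION.CAN_REDEF_TABLE('APP', 'ORDERS');
    EXEC DBMS_REDEFINITION.START_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_NEW');
    -- Recreate indexes, triggers, grants and constraints on the interim table
    -- here (or via COPY_TABLE_DEPENDENTS), then resync and swap.
    EXEC DBMS_REDEFINITION.SYNC_INTERIM_TABLE('APP', 'ORDERS', 'ORDERS_NEW');
    -- The final swap takes only a brief exclusive lock.
    EXEC DBMS_REDEFINITION.FINISH_REDEF_TABLE('APP', 'ORDERS', 'ORDERS_NEW');
    EOF
    The downtime then shrinks to roughly the length of the final FINISH_REDEF_TABLE swap rather than the full data migration.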

  • Does anybody know the estimated downtime for upgrading a 10 TB 9i DB to 10gR2?

    Does anybody know the estimated downtime for upgrading a 10 TB 9i database to 10gR2?

    It depends on the method chosen.
    It depends on whether you move the database or stay on the same server box.
    It depends on the processors you have.
    In fact, the size of the database is not directly relevant during an upgrade; it is more a question of the number of objects.
    If I said one hour, that would not be a big help. A manual upgrade may be very quick, but estimating the time is not an easy task. Test on your test server and estimate for your production system.
    Nicolas.
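
    To put Nicolas's point into practice, a quick hedged check of the figures that actually drive the dictionary upgrade time on the 9i source:
    sqlplus -s / as sysdba <<'EOF'
    SELECT COUNT(*) FROM dba_objects;
    SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
    EOF
    Recompiling the invalid objects beforehand (utlrp.sql) is one of the cheaper ways to shorten the catupgrd.sql run, whatever the data volume.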

  • Error while running AutoConfig on apps tier during upgrade

    Hi,
    Error while running AutoConfig on the apps tier during an upgrade from 11.5.9 to 11.5.10.2.
    Below is the error message:
    Enter the APPS user password :
    The log file for this session is located at: /u01/app/tinst/tinstappl/admin/TINST_dba5/log/05031134/adconfig.log
    AutoConfig is configuring the Applications environment...
    AutoConfig will consider the custom templates if present.
    Using APPL_TOP location : /u01/app/tinst/tinstappl
    Classpath : /u01/app/tinst/tinstcomn/util/java/1.6/jdk1.6.0_18/jre/lib/rt.jar:/u01/app/tinst/tinstcomn/util/java/1.6/jdk1.6.0_18/lib/dt.jar:/u01/app/tinst/tinstcomn/util/java/1.6/jdk1.6.0_18/lib/tools.jar:/u01/app/tinst/tinstcomn/java/appsborg2.zip:/u01/app/tinst/tinstcomn/java
    Using Context file : /u01/app/tinst/tinstappl/admin/TINST_dba5.xml
    Context Value Management will now update the Context file
    Exception in thread "main" java.lang.NoClassDefFoundError: oracle/jdbc/OracleDriver
    at oracle.apps.ad.util.DBUtil.registerDriver(DBUtil.java:153)
    at oracle.apps.ad.util.DBUtil.<init>(DBUtil.java:102)
    at oracle.apps.ad.tools.configuration.FileSysDBCtxMerge.getDBConnection(FileSysDBCtxMerge.java:759)
    at oracle.apps.ad.tools.configuration.FileSysDBCtxMerge.initializeParams(FileSysDBCtxMerge.java:147)
    at oracle.apps.ad.tools.configuration.FileSysDBCtxMerge.setParams(FileSysDBCtxMerge.java:128)
    at oracle.apps.ad.context.CtxValueMgt.mergeCustomInFiles(CtxValueMgt.java:1762)
    at oracle.apps.ad.context.CtxValueMgt.processCtxFile(CtxValueMgt.java:1579)
    at oracle.apps.ad.context.CtxValueMgt.main(CtxValueMgt.java:709)
    Caused by: java.lang.ClassNotFoundException: oracle.jdbc.OracleDriver
    at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
    ... 8 more
    ERROR: Context Value Management Failed.
    Terminate.
    The logfile for this session is located at:
    /u01/app/tinst/tinstappl/admin/TINST_dba5/log/05031134/adconfig.log
    Please let me know your suggestions to resolve this issue.
    Regards,
    Sreenivasulu.

    Hi,
    The DB tier AutoConfig completed successfully.
    We have already checked the above-mentioned MetaLink IDs, but the issue is still the same.
    Can you please advise any other solutions?
    Regards,
    Sreenivasulu.
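
    The stack trace itself points at the classpath: oracle.jdbc.OracleDriver could not be loaded from any of the entries listed in the log. A hedged first check, reusing the paths from the log above, is whether the driver class is actually present in appsborg2.zip:
    unzip -l /u01/app/tinst/tinstcomn/java/appsborg2.zip | grep -i "oracle/jdbc"
    If the class is missing or the archive is damaged, restoring or regenerating appsborg2.zip and re-running AutoConfig would be a reasonable next step.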

  • Error during Upgrade from 4.6c to ECC 6.0

    Hi All,
    We are facing an error when upgrading from 4.6c to ECC 6.0, on the table COEP: a runtime object inconsistency. What we found is that the upgrade has created new extra fields in the table. In the log file the error is given as "Duplicate field name", but we are not able to find a duplicate field name in the table. Please help as early as possible; the upgrade process is stuck.
    Regards
    Anil Kumar K

    Hi Anil,
    Is this issue fixed? Can I know how you fixed it?
    I replied to your message "Re: How to adopt the index changes during upgrade".
    Thanks,
    Somar

  • Error in transaction KEA0 during upgrade

    Dear All,
    While upgrading the customer system from 4.7 to ECC 6.0, we found an error in transaction KEA0 when trying to activate the cross-client part of an operating concern.
    The activation log says that there are syntax errors in the generated subroutine pool RK2O0100.
    I corrected the error and activated the code, but when I try to execute it again it gives the same error, and when I go to the subroutine pool I find the changes have been undone and the previous version is active again. This happens repeatedly: I correct and activate the code and it reports no errors, but once I log off and log in again, the same error is back.
    Help will be appreciated.
    Thanks in advance,
    Abhi

    Dear Raymond,
    Thanks for your help.
    When I try to run RKEAGENV, it shows an error message with a STOP button saying that field WTGBTR is not contained in the nametab for table CE1E_B1, and when I run RKEAGENF it gives a short dump saying that the entry for CE10100 is not allowed in TTABS.
    Please help.

  • Disk space not enough during upgrade of CUCM 9.1(2)SU1 to 9.1(2)SU2a

    Hi,
    I am facing an issue while upgrading my current CUCM 9.1(2)SU1 to 9.1(2)SU2a; it fails with this error message: "There is not enough disk space in the common partition to perform the upgrade, for steps to resolve this condition please refer to CUCM 9.1(1) release notes or view defect CSCuc63312 in bug toolkit on Cisco.com". I checked the bug ID and found 2 ways to resolve it:
    1. Reduce the amount of traces on the system, but I am not ready to do this.
    2. Install a COP file named ciscocm.free_common_space_v1.0.cop.sgn.
    If I select option 2, is there any risk? Does it mean only the 9.1(2) inactive version will be cleared from disk, or will it clear out both the active and inactive versions?
    I need some advice on this before I proceed.
    Thanks.

    Hi,
    As per the following link
    http://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cucm/rel_notes/9_1_1/CUCM_BK_R6F8DBD4_00_release-notes-for-cucm-91/CUCM_BK_R6F8DBD4_00_release-notes-for-cucm-91_chapter_011.html
    "You will not be able to switch to the previous version after the COP file is installed. For example, if you are upgrading from Cisco Unified Communications Manager 9.0(1) to Cisco Unified Communications Manager 9.1(1) and the previous version is Cisco Unified Communications Manager 8.6, the COP file clears the space by removing the 8.6 version data that resides in the common partition. So after you apply the COP file, you will not be able to switch to the 8.6 version."
    Additionally, regarding the first option: if you do not want to reduce the tracing levels, you can still delete some old traces using RTMT, which lets you delete the traces from the server as well as transfer/download them to your PC at the same time, by checking the 'delete from server' option on the last page of the log file collection wizard.
    HTH
    Manish
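
    Either way, a hedged before-and-after check of the common partition from the OS admin CLI (command as documented for CUCM 9.x; verify on your version) shows whether the COP file or the RTMT trace deletion actually freed enough space to retry the upgrade:
    admin: show diskusage common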
