Reuse existing MaxDB datafiles during migration

Hello all,
We performed a test heterogeneous migration of our SAP system on MaxDB 7.7 using SAPinst together with the Migration Monitor. Now we are planning to proceed with the productive migration.
Creating the data volumes during the test run took 4 hours, so I would like to know whether there is a way to reuse the existing MaxDB data files and log files. When I configure SAPinst and define the number of log volumes I need, SAPinst asks whether I want to reuse the log files (I answer "OK"). But when I configure the number of data files I need, I just receive an input error about not having enough free space.
Is there any way to reuse these MaxDB data files instead of deleting and recreating them?
Thanks in advance for any suggestions.
Best Regards,
Emil

> Is there any way to reuse these MaxDB data files instead of deleting and recreating them?
No, there's no way to reuse data volumes.
One of the reasons is that the data volumes are chained via pointers in the respective volume headers and have to fit into the chain of data volumes. The data volumes also have to match what is configured in the database parameters.
You can, however, tell MaxDB not to format the data volumes by setting the parameter FormatDataVolume (FORMAT_DATAVOLUME) to NO.
With this setting the volumes are not completely formatted; only the space is allocated in the file system.
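For illustration only (this is not from the original reply): with dbmcli the parameter could be checked and changed roughly like this, where the database name and DBM operator credentials are placeholders for your own system:
    dbmcli -d <DBSID> -u <dbm_user>,<password> param_directget FORMAT_DATAVOLUME
    dbmcli -d <DBSID> -u <dbm_user>,<password> param_directput FORMAT_DATAVOLUME NO
Treat this as a sketch; depending on the MaxDB 7.7 patch level the database may have to be restarted for the change to take effect, so check the parameter documentation for your release.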
Anyhow - are you positively sure that you need to perform a heterogeneous system copy?
Be aware that this is really only necessary when the source and target platforms have a different byte order.
regards,
Lars

Similar Messages

  • Error during migration of ADF project from Jdev 12 to jdev 11.1.1.7

    Hi all,
    I created an ADF project in JDeveloper 12c. During migration from 12c to JDeveloper 11g everything was normal, but when I tried to deploy it on JDeveloper's integrated WebLogic 11g, it produced this error:
    9 Sep, 2014 8:56:39 PM IST> <Error> <J2EE> <BEA-160197> <Unable to load descriptor C:\Users\Mayank\AppData\Roaming\JDeveloper\system11.1.1.7.40.64.93\o.j2ee\drs\ADF_FUSION_POC/META-INF/weblogic-application.xml of module ADF_FUSION_POC. The error is weblogic.descriptor.DescriptorException: Unmarshaller failed
      at weblogic.descriptor.internal.MarshallerFactory$1.createDescriptor(MarshallerFactory.java:161)
      at weblogic.descriptor.BasicDescriptorManager.createDescriptor(BasicDescriptorManager.java:323)
      at weblogic.application.descriptor.AbstractDescriptorLoader2.getDescriptorBeanFromReader(AbstractDescriptorLoader2.java:788)
      at weblogic.application.descriptor.AbstractDescriptorLoader2.createDescriptorBean(AbstractDescriptorLoader2.java:409)
      at weblogic.application.descriptor.AbstractDescriptorLoader2.loadDescriptorBeanWithoutPlan(AbstractDescriptorLoader2.java:759)
      at weblogic.application.descriptor.AbstractDescriptorLoader2.loadDescriptorBean(AbstractDescriptorLoader2.java:768)
      at weblogic.application.ApplicationDescriptor.getWeblogicApplicationDescriptor(ApplicationDescriptor.java:324)
      at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:181)
      at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
      at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
      at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
      at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
      at weblogic.management.deploy.internal.MBeanConverter.createApplicationForAppDeployment(MBeanConverter.java:67)
      at weblogic.management.deploy.internal.MBeanConverter.setupNew81MBean(MBeanConverter.java:315)
      at weblogic.deploy.internal.targetserver.operations.ActivateOperation.compatibilityProcessor(ActivateOperation.java:81)
      at weblogic.deploy.internal.targetserver.operations.AbstractOperation.setupPrepare(AbstractOperation.java:295)
      at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:97)
      at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)
      at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747)
      at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216)
      at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250)
      at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159)
      at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:171)
      at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:13)
      at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:46)
      at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
      at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
      at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    Caused by: com.bea.xml.XmlException: weblogic.descriptor.BeanAlreadyExistsException: Bean already exists: "weblogic.j2ee.descriptor.wl.LibraryRefBeanImpl@874a85bf(/LibraryRefs[[CompoundKey: adf.oracle.domain]])"
      at com.bea.staxb.runtime.internal.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:54)
      at com.bea.staxb.runtime.internal.RuntimeBindingType$BeanRuntimeProperty.setValue(RuntimeBindingType.java:539)
      at com.bea.staxb.runtime.internal.AttributeRuntimeBindingType$QNameRuntimeProperty.fillCollection(AttributeRuntimeBindingType.java:381)
      at com.bea.staxb.runtime.internal.MultiIntermediary.getFinalValue(MultiIntermediary.java:52)
      at com.bea.staxb.runtime.internal.AttributeRuntimeBindingType.getFinalObjectFromIntermediary(AttributeRuntimeBindingType.java:140)
      at com.bea.staxb.runtime.internal.UnmarshalResult.unmarshalBindingType(UnmarshalResult.java:200)
      at com.bea.staxb.runtime.internal.UnmarshalResult.unmarshalDocument(UnmarshalResult.java:169)
      at com.bea.staxb.runtime.internal.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:65)
      at weblogic.descriptor.internal.MarshallerFactory$1.createDescriptor(MarshallerFactory.java:150)
      ... 27 more
    Caused by: weblogic.descriptor.BeanAlreadyExistsException: Bean already exists: "weblogic.j2ee.descriptor.wl.LibraryRefBeanImpl@874a85bf(/LibraryRefs[[CompoundKey: adf.oracle.domain]])"
      at weblogic.descriptor.internal.ReferenceManager.registerBean(ReferenceManager.java:231)
      at weblogic.j2ee.descriptor.wl.WeblogicApplicationBeanImpl.setLibraryRefs(WeblogicApplicationBeanImpl.java:1440)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      at java.lang.reflect.Method.invoke(Method.java:597)
      at com.bea.staxb.runtime.internal.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:48)
      ... 35 more
    .>
    <9 Sep, 2014 8:56:39 PM IST> <Error> <Deployer> <BEA-149605> <Failed to create App/Comp mbeans for AppDeploymentMBean ADF_FUSION_POC. Error - weblogic.management.DeploymentException: Unmarshaller failed.
    weblogic.management.DeploymentException: Unmarshaller failed
      at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:193)
      at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
      at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
      at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
      at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
      Truncated. see log file for complete stacktrace
    Caused By: weblogic.descriptor.BeanAlreadyExistsException: Bean already exists: "weblogic.j2ee.descriptor.wl.LibraryRefBeanImpl@874a85bf(/LibraryRefs[[CompoundKey: adf.oracle.domain]])"
      at weblogic.descriptor.internal.ReferenceManager.registerBean(ReferenceManager.java:231)
      at weblogic.j2ee.descriptor.wl.WeblogicApplicationBeanImpl.setLibraryRefs(WeblogicApplicationBeanImpl.java:1440)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      Truncated. see log file for complete stacktrace
    >
    <9 Sep, 2014 8:56:39 PM IST> <Error> <Deployer> <BEA-149265> <Failure occurred in the execution of deployment request with ID '1410276398843' for task '0'. Error is: 'weblogic.management.DeploymentException: Unmarshaller failed'
    weblogic.management.DeploymentException: Unmarshaller failed
      at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:193)
      at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
      at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
      at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
      at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
      Truncated. see log file for complete stacktrace
    Caused By: weblogic.descriptor.BeanAlreadyExistsException: Bean already exists: "weblogic.j2ee.descriptor.wl.LibraryRefBeanImpl@874a85bf(/LibraryRefs[[CompoundKey: adf.oracle.domain]])"
      at weblogic.descriptor.internal.ReferenceManager.registerBean(ReferenceManager.java:231)
      at weblogic.j2ee.descriptor.wl.WeblogicApplicationBeanImpl.setLibraryRefs(WeblogicApplicationBeanImpl.java:1440)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      Truncated. see log file for complete stacktrace
    >
    <9 Sep, 2014 8:56:39 PM IST> <Warning> <Deployer> <BEA-149004> <Failures were detected while initiating deploy task for application 'ADF_FUSION_POC'.>
    <9 Sep, 2014 8:56:39 PM IST> <Warning> <Deployer> <BEA-149078> <Stack trace for message 149004
    weblogic.management.DeploymentException: Unmarshaller failed
      at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:193)
      at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
      at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
      at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
      at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
      Truncated. see log file for complete stacktrace
    Caused By: weblogic.descriptor.BeanAlreadyExistsException: Bean already exists: "weblogic.j2ee.descriptor.wl.LibraryRefBeanImpl@874a85bf(/LibraryRefs[[CompoundKey: adf.oracle.domain]])"
      at weblogic.descriptor.internal.ReferenceManager.registerBean(ReferenceManager.java:231)
      at weblogic.j2ee.descriptor.wl.WeblogicApplicationBeanImpl.setLibraryRefs(WeblogicApplicationBeanImpl.java:1440)
      at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
      at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
      at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      Truncated. see log file for complete stacktrace
    >
    #### Cannot run application ADF_FUSION_POC due to error deploying to IntegratedWebLogicServer.
    [08:56:39 PM] ####  Deployment incomplete.  ####
    [08:56:39 PM] Remote deployment failed (oracle.jdevimpl.deploy.common.Jsr88RemoteDeployer)
    [Application ADF_FUSION_POC stopped and undeployed from Server Instance IntegratedWebLogicServer]
    <Logger> <error> ServletContainerAdapter manager not initialized correctly.
    How can I solve this error? Thanks.

    Hi Shakif,
    Ideally it should work, and you should be able to see it in the change list of the target system.
    Could you tell me the exact error?
    Thanks
    Satish

  • Tablespace assignment during migration

    Hello all,
    I have a question about the Migration Workbench: it gave two options at the time of repository creation, a local repository or an Oracle repository. Is it OK to select the local repository instead of the Oracle repository? I have tested the migration process multiple times, and my idea was to not recreate the repository in Oracle each time I test it, so I selected the local option during repository creation.
    Next, I deleted the <DATABASENAME> default tablespace in the Oracle model, as we had already pre-created three tablespaces in Oracle for the purpose of the migration: OMWB_DATA, OMWB_INDEX and OMWB_TEMP.
    I then selected the two users <DATABASENAME> and omwb_emulation and assigned them the default and temp tablespaces: Default = OMWB_DATA, Temp = OMWB_TEMP.
    I just wanted to verify whether this is an acceptable way of doing the migration, because after the migration is complete the <DATABASENAME> tablespace has the schema objects but the omwb_emulation tablespace is empty.

    The repository created at the initial launch is used to store metadata during a migration and, as you say, to save the state of your "migration project" so that you can relaunch it at a later stage. There should be no difference whichever option you choose.
    If you are capturing the same database over and over again, you can reuse it; however, if you are migrating different databases, it is better to have a new repository. It doesn't take long to create.
    As to the choice of tablespaces: by default we create a tablespace for each database we capture. However, there is an option (see the Tools menu) to override our default choice and select tablespaces from your target database instance. You can then assign schemas to that tablespace and remove the one we create from the Oracle model.
    OMWB_EMULATION is a schema that just contains PL/SQL functions etc. to support your migration. You need it only once, or not at all (depending on whether your code references those functions).
    Donal

  • OOF not working for mailboxes homed on Exchange 2010 server during migration

    Hi There
    I'm troubleshooting an issue during an Exchange 2010 to Exchange 2013 migration where OOF (Out of Office) is not working for users homed on the Exchange 2010 mailbox servers. This applies to users on Outlook 2010 and Outlook 2013. OWA works perfectly, and you can also enable OOF via the Exchange Management Shell.
    Users homed on the Exchange 2013 servers do not have any issues.
    The Exchange 2010 servers are on SP3 Rollup 4, and the 2013 servers are on SP1 only, without any rollups installed.
    So my question is: has anyone experienced this issue during migration, and will patching the Exchange 2010 servers to Rollup 5 or 6 and the Exchange 2013 servers to SP1 Rollup 5 fix the OOF issue? I have requested that the client patch to these levels, but in the meantime I'm checking whether anyone else has had similar issues.
    Any pointers will be welcome.

    Hi,
    1. Please check that the problematic Exchange 2010 user account is able to access the internal URL of the EWS virtual directory; you can check that via a browser. It will prompt for a user name and password; please provide them.
    The EWS URL would be something like https://mail.youdomain.com/EWS/Exchange.asmx
    2. The second step would be an Autodiscover check for the problematic user account. You can do that via Test E-mail AutoConfiguration; there you can see whether Outlook is fetching the proper URLs for EWS and Autodiscover.
    3. Then please uncheck the proxy settings in the browser and test again.
    4. Please use the following command to check Autodiscover and the web services (see the example at the end of this thread):
    Test-OutlookWebServices
    Please also check the links below:
    http://exchangeserverpro.com/exchange-2013-test-outlook-web-service/
    http://www.proexchange.be/blogs/exchange2007/archive/2009/07/14/your-out-of-office-settings-cannot-be-displayed-because-the-server-is-currently-unavailable-try-again-later.aspx
    http://port25guy.com/2013/07/20/user-cannot-set-an-out-of-office-in-exchange-20102013-or-eventid-3004-appears-in-your-event-log/
    http://www.theemailadmin.com/2010/06/exchange-server-2010-out-of-office/
    Regards
    S.Nithyanandham
    Thanks S.Nithyanandham
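    For reference, steps 1 and 4 above can also be run from the Exchange Management Shell; a rough sketch (the mailbox address is a placeholder, and parameter availability differs slightly between Exchange 2010 and 2013):
      Get-WebServicesVirtualDirectory | Format-List Identity,InternalUrl,ExternalUrl
      Test-OutlookWebServices -Identity user@yourdomain.com | Format-List
    The first command shows the EWS URLs the clients should be resolving; the second runs the web-services test mentioned in step 4 for one mailbox.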

  • Reuse existing interfaces

    Hello experts,
    Could someone please help me understand what the best approach would be to reuse existing interfaces.
    For example: I have a File-to-IDoc scenario. Now I need to create another scenario, say File-to-web-service.
    How can I best reuse the file-side development that I have already completed?
    I am a newbie trying to learn PI. I am on PI 7.1 EHP1.
    Thank you.

    Scenario: We have three different interfaces.
    a) File to File
    b) File to web service ( Sender interface is similar to Sender of (a) )
    c) web service to File ( Receiver interface is similar to that of (a) )
    I will be creating two SIs, one inbound and one outbound.
    There will be one BO (I am assuming this to be PI).
    SI - Inbound will have two operations, one for the file and the other for the web service.
    SI - Outbound will have two operations, one for the file and the other for the web service.
    The number of SIs will depend on the kind of data being exchanged. If those interfaces have nothing in common (data-wise), you will have to define two outbound SIs: one for the file and another for the web service.
    If the first two interfaces exchange common data (e.g. data concerning the customer), you can define one SI_Customer with two operations, plus another SI for the web service and another for the inbound file structure (scenario c).
    The modeling will, as mentioned before, depend on the kind of data: you will define two models, one for interfaces (a) and (b) and a second model for your WS-to-File interface (c).
    Take a look at this presentation:
    PI Best Practices: Modeling
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/303856cd-c81a-2c10-66bf-a4af539b8a3e?quicklink=index&overridelayout=true
    If it is not clear, let me know.
    Edited by: Rodrigo Alejandro Pertierra on Sep 24, 2010 12:18 PM

  • Does a migrate interrupts VM during migration

    A couple of years ago I migrated a VM on a 2008 host with VMM 2008. I don't remember whether there is an interruption during migration.
    I need to perform a one-time backup of a live VM. I cannot stop this machine to make an export right now.
    So I need confirmation of what the MIGRATE action does. Worst case, I can do a backup later with the EXPORT option.
    VM on 2008. VMM2008 available.
    Thx.
    "When you hit a wrong note it's the next note that makes it good or bad". Miles Davis

    It depends on your server OS.
    If you're running Windows Server 2008 R2 or later, then you have Live Migration available, which allows you to migrate without any downtime. Live Migration was introduced in 2008 R2, so it isn't available in 2008 RTM. Quick Migration is available in 2008 RTM; however, that does require some downtime. In either case, I believe you need to be using a failover cluster to use it.
    If you're running Windows Server 2012, then things are a LOT easier, as you now have the use of Shared Nothing Live Migration, which essentially allows you to migrate the VM with no downtime between two physical servers without the need for them to be part of a cluster - just two boxes sitting in a rack on a network, switching a VM from one to the other (see the example below).
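    As an illustration of the Server 2012 case (not something from the original exchange): a shared-nothing live migration can be started from PowerShell roughly like this, where the VM name, destination host and storage path are placeholders and Live Migration must already be enabled on both hosts:
      Move-VM -Name "MyVM" -DestinationHost "HYPERV02" -IncludeStorage -DestinationStoragePath "D:\Hyper-V\MyVM"
    The VM keeps running while its storage and memory are copied to the destination host.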

  • Altering database during migration from MS Access 2000

    Hi to all Oracle forum members.
    I want to migrate an MS Access 2000 database to Oracle 9i Release 2. I want to know how I can alter the database design during the migration process. My plan is to split an existing table into several, rearrange the data, and maybe introduce some new tables as well. Can these changes be made using the Migration Workbench, or only after migrating to Oracle? What is the most efficient way to do this, and how can I accomplish these changes?
    Thanks in advance...
    A desperate new Oracle user... :)

    Hello patriotaki! I used Oracle Migration Workbench and everything seems to be fine. Just take care with code page migration issues... have fun!!!

  • DB6CONV/DB6 - Is there a way to reuse existing compression dictionary

    As some of you probably know, DB6CONV handles compression by retaining the compression flag while performing the table move, but unfortunately not the compression dictionary. This causes some tables to increase in size because of the way the new dictionary is built from a sample of 1,000 rows. Please see an example below where the data size of a compressed table increased after the table move.
    For table SAPR3./BI0/ACCA_O0900, please see the statistics below.
    Old Table:
    Data Size = 10,423,576 KB
    Index Size =  9,623,776 KB
    New Table (both data and index) with Index Compression (Moved  using DB6CONV, took 1 hr for 100 million row table):
    Data Size = 16,352,352 KB
    Index Size =  4,683,296 KB
    Reorg of the table with RESETDICTIONARY (via the DB2 CLP; takes 1 hr 32 min):
    Data Size = 8,823,776 KB
    Index Size =  4,677,792 KB
    We are on DB2 9.5 and will soon be migrating to DB2 9.7. In order to use the reclaimable tablespace feature that comes with DB2 9.7, we are planning to create new tablespaces (especially for objects like PSA/change logs in BW) and then move the compressed tables after enabling index compression, but DB6CONV is not going to be the right tool based on our experience.
    Is there a way for DB6CONV or DB2 to reuse the existing compression dictionary of the source table when it performs a table move to a new tablespace?
    Thanks,
    Raja

    Hi Raja,
    No, DB6CONV cannot reuse the existing compression dictionary - this is in general not possible.
    BUT: the good news is that the next version, V5, of DB6CONV will (amongst other new features) handle compression in a much better way! Like R3load and online table move, the compression dictionary will then be created based on (if possible) 20 MB of sampled data, ensuring optimal compression.
    This new version will become generally available within the next few weeks.
    Regards, Frank
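    For reference, the reorg-with-reset-dictionary step mentioned in the question can be run from the DB2 command line roughly as follows (a sketch; the schema and table name are placeholders, and a RUNSTATS afterwards is generally advisable):
      db2 "REORG TABLE SAPR3.MYTABLE RESETDICTIONARY"
      db2 "RUNSTATS ON TABLE SAPR3.MYTABLE WITH DISTRIBUTION AND INDEXES ALL"
    The offline REORG rebuilds the compression dictionary from the table's actual data rather than a small sample, which is presumably why it gave the smaller data size quoted above.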

  • Error in moving exchange 2010 mailboxes to Exchange 2013 SP1 during migration

    Dear All,
    We were running Exchange 2010 SP3 with the MBX/HT/CAS roles on a single server. Now we are migrating from Exchange 2010 to Exchange 2013 SP1. After configuring coexistence, during the mailbox move from Exchange 2010 to Exchange 2013 SP1 we are facing the error below. Please help us troubleshoot it. From our searches on the net, people were suggesting that it can be caused by not being able to resolve the NetBIOS name. We checked that, and we are able to ping both servers by NetBIOS and FQDN names.
    [PS] C:\Windows\system32>New-MoveRequest -Identity
    '[email protected]' -TargetDatabase "CI-DB01"
    MapiExceptionNetworkError: Unable to make connection to the server. (hr=0x80004005, ec=2423)
    Diagnostic context:
        Lid: 14744   dwParam: 0x0 Msg: EEInfo: Status: 1722
        Lid: 9624    dwParam: 0x0 Msg: EEInfo: Detection location: 323
        Lid: 13720   dwParam: 0x0 Msg: EEInfo: Flags: 0
        Lid: 11672   dwParam: 0x0 Msg: EEInfo: NumberOfParameters: 0
        Lid: 62184
        Lid: 16280   dwParam: 0x0 Msg: EEInfo: ComputerName: n/a
        Lid: 8600    dwParam: 0x0 Msg: EEInfo: ProcessID: 6004
        Lid: 12696   dwParam: 0x0 Msg: EEInfo: Generation Time: 0414-08-21T09:10:43.8970000Z
        Lid: 10648   dwParam: 0x0 Msg: EEInfo: Generating component: 18
        Lid: 14744   dwParam: 0x0 Msg: EEInfo: Status: 1237
        Lid: 9624    dwParam: 0x0 Msg: EEInfo: Detection location: 313
        Lid: 13720   dwParam: 0x0 Msg: EEInfo: Flags: 0
        Lid: 11672   dwParam: 0x0 Msg: EEInfo: NumberOfParameters: 0
        Lid: 62184
        Lid: 16280   dwParam: 0x0 Msg: EEInfo: ComputerName: n/a
        Lid: 8600    dwParam: 0x0 Msg: EEInfo: ProcessID: 6004
        Lid: 12696   dwParam: 0x0 Msg: EEInfo: Generation Time: 0414-08-21T09:10:43.8970000Z
        Lid: 10648   dwParam: 0x0 Msg: EEInfo: Generating component: 18
        Lid: 14744   dwParam: 0x0 Msg: EEInfo: Status: 10060
        Lid: 9624    dwParam: 0x0 Msg: EEInfo: Detection location: 311
        Lid: 13720   dwParam: 0x0 Msg: EEInfo: Flags: 0
        Lid: 11672   dwParam: 0x0 Msg: EEInfo: NumberOfParameters: 3
        Lid: 12952   dwParam: 0x0 Msg: EEInfo: prm[0]: Long val: 22964
        Lid: 15000   dwParam: 0x0 Msg: EEInfo: prm[1]: Pointer val: 0x0
        Lid: 15000   dwParam: 0x0 Msg: EEInfo: prm[2]: Pointer val: 0xFE01A8C000000000
        Lid: 62184
        Lid: 16280   dwParam: 0x0 Msg: EEInfo: ComputerName: n/a
        Lid: 8600    dwParam: 0x0 Msg: EEInfo: ProcessID: 6004
        Lid: 12696   dwParam: 0x0 Msg: EEInfo: Generation Time: 0414-08-21T09:10:43.8970000Z
        Lid: 10648   dwParam: 0x0 Msg: EEInfo: Generating component: 18
        Lid: 14744   dwParam: 0x0 Msg: EEInfo: Status: 10060
        Lid: 9624    dwParam: 0x0 Msg: EEInfo: Detection location: 318
        Lid: 13720   dwParam: 0x0 Msg: EEInfo: Flags: 0
        Lid: 11672   dwParam: 0x0 Msg: EEInfo: NumberOfParameters: 0
        Lid: 53361   StoreEc: 0x977
        Lid: 51859
        Lid: 33649   StoreEc: 0x977
        Lid: 43315
        Lid: 58225   StoreEc: 0x977
        Lid: 39912   StoreEc: 0x977
        Lid: 54129   StoreEc: 0x977
        Lid: 50519
        Lid: 59735   StoreEc: 0x977
        Lid: 59199
        Lid: 27356   StoreEc: 0x977
        Lid: 65279
        Lid: 52465   StoreEc: 0x977
        Lid: 60065
        Lid: 33777   StoreEc: 0x977
        Lid: 59805
        Lid: 52487   StoreEc: 0x977
        Lid: 19778
        Lid: 27970   StoreEc: 0x977
        Lid: 17730
        Lid: 25922   StoreEc: 0x977
        + CategoryInfo          : NotSpecified: (:) [New-MoveRequest], RemoteTransientException
        + FullyQualifiedErrorId : [Server=Exch01,RequestId=f6886977-92f1-4148-991b-aa76b449aff5,TimeStamp=8/21/2014 9:1
       0:43 AM] [FailureCategory=Cmdlet-RemoteTransientException] 7FCC37,Microsoft.Exchange.Management.RecipientTasks.New
    MoveRequest
      + PSComputerName        : Exch01.domain.local
    Please help as we are stuck here!!
    Thanks in advance!!

    Can you ping the server by name, not by FQDN? E.g. ping server1.
    Make sure the firewall/antivirus is not blocking communication. Disable the antivirus and try again (see the quick checks below).
    Thanks,
    MAS
    Please mark as helpful if you find my comment helpful or as an answer if it does answer your question. That will encourage me - and others - to take time out to help you.
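    For reference, the name-resolution check and the state of the failed move request can be looked at roughly like this from the Exchange Management Shell (a sketch; the server name is a placeholder):
      Test-Connection Exch2010Server
      Get-MoveRequest | Get-MoveRequestStatistics | Format-List DisplayName,Status,FailureType,Message
    The first command is the PowerShell equivalent of the ping test suggested above; the second shows why the move request failed.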

  • Password reset/change during migration [Problem an...

    Hi all,
    I would like to share a password change problem I have encountered, and some remedial steps to resolve it during the migration from Yahoo Mail to BT Mail.
    Background:
    I am a relatively new customer (less than a year) and I have an ABC@btinternet BT ID and email account name. I recently received an email informing me of the prerequisites for the migration. I have attempted to change my password since then, which resulted in my being effectively locked out of my account. I tried all of the usual remedies, ranging from clearing cookies and browsing data, different browsers, and deep virus and malware scans, to a completely new installation of the OS (Win 7) and even using a Linux distribution.
    Problem:
    After logging in via the BT main portal email link, I attempted to change my password via the account settings page. This appeared to be successful, and a confirmation email was received via a previously set up forwarding address, which also suggested it had worked. However, after logging out, I was unable to log back in with either the new password or the old one. Any attempt to reset using the lost-password wizard resulted in the same initial indication of success, but ultimate failure to log in. This is rather counter-intuitive.
    After several hours of live chat and telephone conversations with the help department, it was found that using the BT main page to initiate a password change for the email service was at fault.
    Solution:
    There is an alternative: use the Yahoo (www.mail.yahoo.com) sign-in page to initiate a password change/reset, and it will work as intended. It is exactly the same user experience, i.e. the wizard completes and a confirmation email arrives, but this time the change will take effect when logging in from the BT main page.
    [Edit]
    If you have found this after already being locked out of your account (which is very likely to be the case) simply go to the Yahoo sign in page and enter your old password. You will then be able to access your account settings to change to your new one. Similarly, a forgotten password will work from here too.
    Things to look out for:
    Some of us are on what is dubbed a "merge journey", and this appears to be at the root of the problem. If the URL for a "Forgotten Password" contains something like
    "... /managepassword/merged_consumer_journey/forgottenpass ..."
    then you are probably in the same situation and this advice should apply.
    Thanks go to the help team for their tireless efforts in uncovering this issue. Please reply with any omissions or other suggestions/situations that will help others, and I'll amend/add to this post.
    TLDR:
    Do not use the BT email sign-in link when changing or resetting your password.
    Do use the alternative Yahoo sign-in page to change or reset your password.

    Update:
    Workaround: log in using a different client machine.
    Scary.
    It looks like the same problem reported, without a response, here: Apex.oracle.com SaaS Login - Must Change Password.
    jd

  • Mail-routing during migration

    Hi,
    I am working on a migration project where we want to move all mailboxes from an Exchange org in domain a.com to a new Exchange org in a new domain, b.com. The users' email addresses will be the same in the new domain as in the current domain. My question is: how can I configure the system so that email is delivered correctly to users in the current domain, and to the same users after they are migrated to the new domain?
    Could a solution be to create a send connector of type internal in the current domain a.com with address space *.a.com and send the emails to a smart host in the new domain b.com? Will the system first try to deliver the email locally to the existing Exchange server before it sends it via the send connector to the new Exchange server if the user does not exist in the current domain (because the user has been moved to the new domain)?
    Thor-Egil

    The simplest way to do this is to have a send connector for sending mail directly from a.com's email systems to b.com's, and vice versa.  Here are the steps I've used many times for this and it's worked every time:
    Give all addresses in the current organization a new proxy address - I normally use something like "<alias>@pre.company.com"
    Create mail contacts for all mailboxes in a.com in the b.com organization - and make sure this proxy address is the External Email Address
    Have a proxy address available for all migrated mailboxes - I use something like "<alias>@post.company.com"
    Configure a send connector in a.com that sends anything with an address of post.company.com directly to the b.com servers
    Configure a send connector in b.com that sends anything with an address of pre.company.com directly to the a.com servers
    If a mailbox is moved from a.com to b.com, in a.com, create a mail contact with an External Email Address of <alias>@post.company.com; and in b.com, make sure you change the pre.company.com address to post.company.com
    Now, let's go through what happens to inbound email:
    Before any mailboxes are moved, mail comes in and is delivered in a.com - there are no mailboxes in b.com to worry about.
    Once a mailbox is moved to b.com, if someone in a.com sends to that mailbox, the mail contact for the mailbox changes the <alias>@company.com address to <alias>@post.company.com and sends the message on to the other system
    If someone on b.com sends mail to an a.com mailbox, the opposite happens - the mail contact for the mailbox changes the <alias>@company.com address to <alias>@pre.company.com and sends the message on to the other system
    You let this run until all mailboxes have been moved from a.com to b.com. The beauty of this solution is that at any point you can change the MX record to point to b.com's servers and mail flow will not be impeded; both sides can be configured so that each is authoritative for the company.com namespace, and each will directly handle sending delivery reports. The bad thing is that you need to keep the directories up to date on each side, making sure you create new mail contacts as mailboxes are added - but you have that issue in any co-existence scenario. (A rough shell sketch of the connector and contact setup follows below.)
    HTH ...
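    A minimal Exchange Management Shell sketch of the pieces described above, as run in a.com (the connector name, smart host and contact details are placeholders; the b.com side mirrors it with pre.company.com):
      New-SendConnector -Name "To b.com" -AddressSpaces "post.company.com" -SmartHosts "mail.b.com" -DNSRoutingEnabled $false
      New-MailContact -Name "Jane Doe (moved)" -ExternalEmailAddress "jdoe@post.company.com"
    The send connector routes anything addressed to post.company.com straight to b.com's servers, and the mail contact redirects a migrated user's company.com address to that routing domain.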

  • Database Keeps On Switching During Migration

    Hello. I am currently doing a mailbox migration from Exchange 2007 to Exchange 2013. I have 4 databases in Exchange 2013 which are A, B, C and D (each with different quotas). The databases have their copies on another Exchange server under DAG.
    During the migration, database D suddenly keeps activating its copies on different servers every 5 minutes or so. In the first minute it was active on server 1; after a while it activated on server 2, then went back to server 1. This has repeated itself until now.
    This only happens with database D. Migration to database D keeps failing because it cannot find the target database (because it keeps switching). Stopping and resuming is useless because it will fail again. Migration to databases A, B and C works fine.
    Does anyone have any idea what's going on?

    Hi,
    Glad to hear the good news. Thanks for your update and sharing.
    Best regards,
    Belinda Ma
    TechNet Community Support
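    For context (this is not part of the original exchange): the copy activations described in the question can be observed, and a copy can be temporarily blocked from automatic activation, roughly like this from the Exchange Management Shell; the database and server names are placeholders:
      Get-MailboxDatabaseCopyStatus -Identity "DB-D" | Format-List Name,Status
      Suspend-MailboxDatabaseCopy -Identity "DB-D\SERVER2" -ActivationOnly
    The first command shows which copy is mounted at any moment; the second keeps a particular copy from being auto-activated while the moves run, and it can be reverted afterwards with Resume-MailboxDatabaseCopy.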

  • Want to check whether cost center exists or not during payroll process ?

    Dear All,
    The requirement is to throw an error for an employee during the payroll run if the cost center does not exist.
    I guess this may be achieved only with a PCR. There is an operation OUTWPCOSTC which can help.
    The issue is that there are parameters like VALEN and VAOFF which need to be taken care of while using OUTWPCOSTC.
    Also, when the cost center is not assigned, its value in WPBP is BLANK. How can I query a BLANK value in a PCR?
    Can anyone help with a complete example?
    Regards,
    Ankit

    Hi Ankit,
    Try with PCR
    VALEN 1
    OUTWPCOSTC
    Update the PCR in sub schema COPY  Z002                  Edit basic data
    000040 WPBP                        Read Work Center/Basic Pay Data
    000050 GON                         Continue if Data is Complete
    000060 ACTIO ZQ14 GEN              Check if cost centre exits
    000070 BLOCK END                   Edit basic data
    This will help you while running the payroll.
    Thanks & Regards
    Vikram Mali

  • A way to reuse existing classes instead of generating the stub ones?

    Hello to all,
    I am using Eclipse and WebLogic 10.3 as the application server.
    I have a first project deployed with some exposed web services. I need to access these services from a second project, so I run the clientgen Ant task, which generates the client interface along with some stub classes. These stub classes are basically a copy of the ones from the first project.
    My question is this:
    Is there a way to reuse the original classes that the first project is using, by putting the first project as a dependency of the second? Or do I have to use the generated stub classes?
    Thanks in advance! Any help is appreciated.

  • Moving DIT to a new master during migration

    Hello,
    I am in the process of migrating my schema v1 database to schema v2. In the course of this migration, I would like to make a new system the master server. The existing server hosts both the user directory and the configuration directory.
    I was planning on simply exporting the existing server's DIT and importing it into the new server. However, since this also includes configuration information, I am concerned that I might overwrite the configuration for the new server when I import the existing data into the new server's database.
    What is the suggested process for moving a DS master from one server to another? I didn't find any reference to this on docs.sun.com.
    Thanks,
    Bill

    > That makes sense for the most part. However, I would like my new server to assume the role of the configuration master, too. If I try to use this same trick, will I not lose configuration information from one of the two servers?
    While doing this is possible, it is not recommended. The best practice for the configuration directory is to have a configuration directory instance local to each and every messaging server, to NOT replicate the configuration directory, and to NOT have other data in the directory instance with the configuration.
    However, having said that - if you really want to maintain a central configuration directory and transfer it from one machine to the other ... yes, it's possible, but it requires a lot more help and work than I can give here. It's best for you to use your local support resources to help you through this.
    > I'm assuming that when I set up the new server, it will get its own configuration database, and I'll want to merge the configuration from the old server into the new server's database.
    You will get a new configuration directory instance only if you install it that way. You can install a new Messaging instance using an existing directory instance. Again, it's not recommended.
    > In that case, perhaps I just need to generate an LDIF file of the o=NetscapeRoot suffix and import that into the new database?
    Trying to do that is risky, because you will end up merging two config trees that may well have conflicting data.
    There used to be a 'merge configuration' option in the console that the 5.x messaging servers used; it may well still be there.
    > BTW, I assume that I will also have to run the directory preparation tool on the new server before attempting to create the MMR replication agreement?
    Yes, you will. That is required to load the necessary objectclass and attribute definitions, as well as to set up databases and indexes.
