Shared datasource during migration
Hi, just wanted to check something, even though I don't expect it to be a problem...
We have just started a migration from SCCM 2007 to 2012 R2, and we will be running both environments in parallel for a while.
In the 2007 environment, all content sources were placed on a separate file share (not DFS).
Am I right in assuming I can just leave the content there, and that this will not be a problem for either environment?
More info:
Introduction to Migration in System Center 2012 Configuration Manager
http://technet.microsoft.com/en-us/library/gg699364.aspx#BKMK_MigrationScenarios
Similar Messages
-
Error while activating the datasource during migration
Hi Guys,
While we were migrating DataSources from 3.x to BI 7.0, we encountered an error during activation. The DataSource itself has been migrated, but the activation log gives an error:
Error in activating the DataSource -- RSO404
Has anyone encountered this issue? Your input is highly appreciated.
Regards,
Doniv
Voodi,
Thanks for the response, but the note talks about deletion of the DTP causing the errors. Moreover, we are already on SP13. Any other ideas?
Doniv -
Slow migration rates for shared-nothing live migration over teamed NICs
I'm trying to increase the migration/data transfer rates for shared-nothing live migrations (especially the storage migration part) between two Hyper-V hosts. Both hosts have a dedicated teamed interface (switch-independent,
dynamic) with two 1 Gbit/s NICs, used only for management and transfers. Both NICs on both hosts have RSS enabled (and configured), and the teamed interface also shows RSS enabled, as does the corresponding output from Get-SmbMultichannelConnection.
I'm currently unable to see data transfers of the physical volume of more than around 600-700 MBit/s, even though the team is able to saturate both interfaces with data rates going close to the 2GBit/s boundary when transferring simple files over SMB. The
storage migration seems to use multichannel SMB, as I am able to see several connections all transferring data on the remote end.
As I'm not seeing any form of resource saturation (the NIC/team isn't full, no CPU is maxed out, and neither storage adapter is saturated), I'm slightly stumped that live migration seems to have a built-in limit of around 700 Mbit/s, even over a (pretty much) dedicated
interface that can handle more traffic when transferring simple files. Is this a known limitation with regard to teaming and shared-nothing live migrations?
Thanks for any insights and for any hints on where to look further!
Compression is not configured for live migrations (it's set to SMB instead), but as far as I understand, this is not relevant for the storage migration part of a shared-nothing live migration anyway.
Yes, all NICs and drivers are at their latest version, and RSS is configured (as also stated by the corresponding output from Get-SmbMultichannelConnection, which recognizes RSS on both ends of the connection), and for all NICs bound to the team, Jumbo Frames
(9k) have been enabled and the team is also identified with 9k support (as shown by Get-NetIPInterface).
As the interface is dedicated to migrations and management only (i.e., the corresponding Team is not bound to a Hyper-V Switch, but rather is just a "normal" Team with IP configuration), Hyper-V port does not make a difference here, as there are
no VMs to bind to interfaces on the outbound NIC but just traffic from the Hyper-V base system.
Finally, there are no bandwidth weights and/or QoS rules for the migration traffic bound to the corresponding interface(s).
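For reference, the settings described above can be sanity-checked from an elevated PowerShell prompt on each host. This is a sketch using the in-box Server 2012 R2 cmdlets; exact output columns vary by driver and build:

```powershell
# SMB multichannel: each connection should report RSS capability on both ends
Get-SmbMultichannelConnection

# RSS state per adapter (team members included)
Get-NetAdapterRss | Format-Table Name, Enabled

# Live migration performance option currently in effect (TCPIP, Compression or SMB)
Get-VMHost | Format-List VirtualMachineMigrationEnabled, VirtualMachineMigrationPerformanceOption
```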
As I'm able to transfer close to 2GBit/s SMB traffic over the interface (using just a plain file copy), I'm wondering why the SMB(?) transfer of the disk volume during shared-nothing live migration is seemingly limited to somewhere around 700 MBit/s on the
team; looking at the TCP-connections on the remote host, it does seem to use multichannel SMB to copy the disk, but I might be mistaken on that.
Are there any further hints, or is there any further information I could provide to diagnose this? I'm currently pretty much stumped on where to look next. -
Shared nothing live migration over SMB. Poor performance
Hi,
I'm experiencing really poor performance when migrating VMs between newly installed Server 2012 R2 Hyper-V hosts.
Hardware:
Dell M620 blades
256Gb RAM
2*8C Intel E5-2680 CPUs
Samsung 840 Pro 512Gb SSD running in Raid1
6* Intel X520 10G NICs connected to Force10 MXL enclosure switches
The NICs are using the latest drivers from Intel (18.7) and firmware version 14.5.9
The OS installation is pretty clean: Windows Server 2012 R2 + Dell OMSA + Dell HIT 4.6.
I have removed the NIC teams and vmSwitch/vNICs to simplify troubleshooting. Now there is one NIC configured with one IP. RSS is enabled, no VMQ.
The graphs are from 4 tests.
Test 1 and 2 are nttcp-tests to establish that the network is working as expected.
Test 3 is a shared nothing live migration of a live VM over SMB
Test 4 is a storage migration of the same VM when shut down; the VM is transferred using BITS over HTTP.
It's obvious that the network and NICs can push a lot of data: test 2 had a throughput of 1130 MB/s (9 Gb/s) using 4 threads. The disks can handle a lot more than 74 MB/s, as proven by test 4.
While the graph above doesn't show the CPU load, I have verified that no CPU core came close to running at 100% during tests 3 and 4.
Any ideas?
| Test | Config | Vmswitch | RSS | VMQ | Live Migration Config | Throughput (MB/s) |
| --- | --- | --- | --- | --- | --- | --- |
| NTttcp | NTttcp.exe -r -m 1,*,172.17.20.45 -t 30 | No | Yes | No | N/A | 500 |
| NTttcp | NTttcp.exe -r -m 4,*,172.17.20.45 -t 30 | No | Yes | No | N/A | 1130 |
| Shared nothing live migration | Online VM, 8 GB disk, 2 GB RAM, migrated from host 1 -> host 2 | No | Yes | No | Kerberos, use SMB, any available net | 74 |
| Storage migration | Offline VM, 8 GB disk, migrated from host 1 -> host 2 | No | Yes | No | Unencrypted BITS transfer | 350 |

Hi Per Kjellkvist,
Please try changing the advanced settings for live migrations in Hyper-V Settings: select "Compression" in the performance options area.
Then run tests 3 and 4 again.
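The same change can be made from an elevated PowerShell prompt on both hosts (a sketch; the Server 2012 R2 Hyper-V module is assumed):

```powershell
# Switch the live migration transport from SMB to compression
Set-VMHost -VirtualMachineMigrationPerformanceOption Compression

# Confirm the change took effect
Get-VMHost | Select-Object VirtualMachineMigrationPerformanceOption
```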
Best Regards
Elton Ji
-
Does a MIGRATE action interrupt the VM during migration?
A couple of years ago I did a migration of a VM on a 2008 host with VMM 2008. I don't remember whether there was an interruption during migration.
I need to perform a one-time backup of a live VM, and I cannot stop this machine to make an export right now.
So I need confirmation of how the MIGRATE action behaves. Worst case, I can do a backup later with the EXPORT option.
VM on 2008. VMM 2008 available.
Thx.
"When you hit a wrong note it's the next note that makes it good or bad." - Miles Davis
It depends on your server OS.
If you're running Windows Server 2008 R2 and later then you have live migration available which allows you to migrate without any downtime. Live migration was introduced in 2008 R2, so it isn't available in 2008 RTM. Quick Migration is available
in 2008 RTM, however that does require some downtime. In either case I believe you need to be using a failover cluster to use it.
If you're running Windows Server 2012 then things are a LOT easier, as you now have the use of Shared Nothing Live Migration, which essentially allows you to migrate the server with no downtime, and between two physical servers, without the need for them
to be part of a cluster. Just two boxes sat in a rack on a network, switching a VM from one to the other. -
Hyper-V replica vs Shared Nothing Live Migration
Shared Nothing Live Migration allows you to move a VM over the WAN without shutting it down (how long it takes for an I/O-intensive VM is another story).
Hyper-V Replica does not allow you to perform the DR switchover without shutting down the VM at the primary site!
Why can't it take the VM to the DR site live?
That's because if we use Shared Nothing Live Migration across the WAN, we don't use the data that Hyper-V Replica has already replicated, and it also breaks everything Hyper-V Replica does.
Point is: how do we take the VM to DR in a running state? What is the best way to do that?
Shahid Roofi
Hi Shahid,
Hyper-V Replica is designed as a DR technology, not as a technique for moving VMs. It assumes that, should you require it, the source VM would probably be offline, and you would therefore be powering up the passive copy from a previous point in time,
as it is not a true synchronous replica. It does give you the added benefit of being able to run a planned failover which, as you say, powers off the VM first, runs a final sync, then powers the new VM up. Obviously you can't have the duplicate copy of this VM
running all the time at the remote site; otherwise you would have a split-brain situation for network traffic.
Like live migration, shared nothing live migration is a technology aimed at moving a VM, but as you know it offers the ability to do this without shared storage, requiring only a network connection. When initiated, it moves the whole
VM: it copies the virtual drive and memory, sends new machine writes to both copies, and switches over to the new VM once they match. With regard to the speed, I assume you have shared nothing live migration set up to compress data before sending it across the wire?
If you want a true live migration between remote sites, one way would be to have a SAN array between both sites synchronously replicating data, then stretch the Hyper-V cluster across both sites. Obviously this is a very expensive solution but perhaps
the perfect scenario.
Kind Regards
Michael Coutanche
-
OOF not working for mailboxes homed on Exchange 2010 server during migration
Hi There
I'm troubleshooting an issue during an Exchange 2010 to Exchange 2013 migration where OOF (Out of Office) is not working for users homed on the Exchange 2010 mailbox servers. This applies to users on Outlook 2010 and Outlook 2013. OWA works perfectly, and you can also
enable OOF via the Exchange Management Shell.
Users homed on the Exchange 2013 servers do not have any issues.
The Exchange 2010 servers are on SP3 Rollup 4, and the 2013 servers are on SP1 only, without any rollups installed.
So my question is: has anyone experienced this issue during migration, and will patching the Exchange 2010 servers to Rollup 5 or 6 and the Exchange 2013 servers to SP1 Rollup 5 fix the OOF issue? I've requested the client to patch to these levels, but
in the meantime I'm checking whether anyone else has had similar issues.
Any pointers will be welcome.
Hi,
1. Please check whether the problematic Exchange 2010 user account can access the EWS virtual directory's internal URL in a browser. It will prompt for a user name and password; provide them.
The EWS URL would look like: https://mail.yourdomain.com/EWS/Exchange.asmx
2. The second step is an Autodiscover check for the problematic user account. You can do that via Outlook's Test E-mail AutoConfiguration, which shows whether Outlook is fetching the proper URLs for EWS and Autodiscover.
3. Then please uncheck the proxy settings in the browser and check again.
4. Please use the following command to check the web services:
Test-OutlookWebServices
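From the Exchange Management Shell, that check might look like the following (a sketch: the mailbox address is a placeholder, and the cmdlet's parameters differ slightly between Exchange 2007 and 2010):

```powershell
# Test Autodiscover and the web service URLs (EWS, OAB, etc.) returned for a mailbox
Test-OutlookWebServices -Identity user@yourdomain.com | Format-List
```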
Please check the below links also.
http://exchangeserverpro.com/exchange-2013-test-outlook-web-service/
http://www.proexchange.be/blogs/exchange2007/archive/2009/07/14/your-out-of-office-settings-cannot-be-displayed-because-the-server-is-currently-unavailable-try-again-later.aspx
http://port25guy.com/2013/07/20/user-cannot-set-an-out-of-office-in-exchange-20102013-or-eventid-3004-appears-in-your-event-log/
http://www.theemailadmin.com/2010/06/exchange-server-2010-out-of-office/
Regards
S.Nithyanandham
Thanks S.Nithyanandham -
Shared Datasource Is Not Valid Error
Using Reporting Services (SQL Server 2008 R2), I have a VS SSRS project with 23 reports, all running off the same Shared Data Source. They all execute in VS. I have deployed the datasource, datasets & reports to server. From the Report
Manager, I can execute all reports except 4, which have the same error:
The report server cannot process the report or shared dataset. The shared data source '22d600c2-a3d5-4178-8a39-d05361607ebd' for the report server or SharePoint site is not valid. Browse to the server or site and select a shared data source. (rsInvalidDataSourceReference)
I have tried the following, without success:
1) Verified that the data source exists in the Data Sources folder and is enabled
2) Checked and reset the shared data source to point to the deployed data source in the Data Sources folder
3) Recreated and redeployed the data source
4) Deleted the 4 reports on the server and redeployed
5) Deleted the reports, used Upload instead of Deploy, then set the shared data source to point to the one on the server
6) Verified that the Link column in ReportServer.DataSource is NOT NULL; they actually have the same Link as the other 19 reports. Verified that the GUID in the LinkID exists in the Catalog table and is the ID of the data source.
No matter what I have tried, for some reason the GUID in these failing reports appears to get set to '22d600c2-a3d5-4178-8a39-d05361607ebd', which does not exist in the Catalog table. The other reports never fail; these reports never execute. Anyone
have any ideas?
Victor
As I recall, I went into the Report server db and updated the Catalog table. Open the table and you'll find your reports listed with a link to their datasource (in the DataSource table). I found the 4 reports that failed had a different value
in the LinkID column than the reports that executed, even though they use the same shared datasource.
So I copied the good LinkID value (a GUID) into the 4 reports that failed, and then they ran fine. For some unknown reason, there was an invalid value stuck in the LinkID column which pointed to a datasource that really did not exist.
Go figure.
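As a read-only sketch of the check Victor describes (table and column names are assumed from the ReportServer catalog schema, and editing the catalog directly is unsupported, so back up the database before changing anything):

```sql
-- List reports whose shared data source reference (DataSource.Link)
-- does not resolve to any item in the Catalog table
SELECT rpt.Name AS ReportName,
       ds.Name  AS DataSourceName,
       ds.Link  AS BrokenLinkID
FROM dbo.DataSource AS ds
JOIN dbo.Catalog      AS rpt ON rpt.ItemID = ds.ItemID
LEFT JOIN dbo.Catalog AS src ON src.ItemID = ds.Link
WHERE ds.Link IS NOT NULL
  AND src.ItemID IS NULL;
```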
Victor Kushdilian -
Shared datasource using stored credentials still keeps prompting for password
We have Reporting Services running in SharePoint 2013 integrated mode. We have deployed a shared data source and used stored credentials to connect to an Oracle database. However, whenever I try to build a report and use this report data source, it
displays a dialog box and prompts for the stored credentials' password; once the password has been entered, we can successfully connect to the Oracle DB and build our report. For security reasons, we can't divulge the password to the users.
Is there any way Reporting Services will not prompt for the stored credentials' password whenever we use this data source to build a report? We have already prepopulated the password in the data source definition page. Please help.
In the shared data source, where you configured the name and password, there are two boxes that can be ticked:
1. If the account is a Windows domain user account, specify it in this format: <domain>\<account>, and then select
Use as Windows credentials when connecting to the data source.
2. If the user name and password are database credentials, do not select
Use as Windows credentials when connecting to the data source. If the database server supports impersonation or delegation, you can select
Impersonate the authenticated user after a connection has been made to the data source.
I'm not sure if Oracle allows for Windows authentication. You need to tick one of these two boxes for the password to be passed through to the database.
Cheers,
Martina White -
Error during migration of ADF project from Jdev 12 to jdev 11.1.1.7
Hi all,
I have created an ADF project in JDeveloper 12c. During migration from 12c to JDeveloper 11g everything was normal, but when I tried to deploy it on JDeveloper's integrated WebLogic 11g, it produced this error:
<9 Sep, 2014 8:56:39 PM IST> <Error> <J2EE> <BEA-160197> <Unable to load descriptor C:\Users\Mayank\AppData\Roaming\JDeveloper\system11.1.1.7.40.64.93\o.j2ee\drs\ADF_FUSION_POC/META-INF/weblogic-application.xml of module ADF_FUSION_POC. The error is weblogic.descriptor.DescriptorException: Unmarshaller failed
at weblogic.descriptor.internal.MarshallerFactory$1.createDescriptor(MarshallerFactory.java:161)
at weblogic.descriptor.BasicDescriptorManager.createDescriptor(BasicDescriptorManager.java:323)
at weblogic.application.descriptor.AbstractDescriptorLoader2.getDescriptorBeanFromReader(AbstractDescriptorLoader2.java:788)
at weblogic.application.descriptor.AbstractDescriptorLoader2.createDescriptorBean(AbstractDescriptorLoader2.java:409)
at weblogic.application.descriptor.AbstractDescriptorLoader2.loadDescriptorBeanWithoutPlan(AbstractDescriptorLoader2.java:759)
at weblogic.application.descriptor.AbstractDescriptorLoader2.loadDescriptorBean(AbstractDescriptorLoader2.java:768)
at weblogic.application.ApplicationDescriptor.getWeblogicApplicationDescriptor(ApplicationDescriptor.java:324)
at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:181)
at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
at weblogic.management.deploy.internal.MBeanConverter.createApplicationForAppDeployment(MBeanConverter.java:67)
at weblogic.management.deploy.internal.MBeanConverter.setupNew81MBean(MBeanConverter.java:315)
at weblogic.deploy.internal.targetserver.operations.ActivateOperation.compatibilityProcessor(ActivateOperation.java:81)
at weblogic.deploy.internal.targetserver.operations.AbstractOperation.setupPrepare(AbstractOperation.java:295)
at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:97)
at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217)
at weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747)
at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216)
at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250)
at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:171)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:13)
at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:46)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
Caused by: com.bea.xml.XmlException: weblogic.descriptor.BeanAlreadyExistsException: Bean already exists: "weblogic.j2ee.descriptor.wl.LibraryRefBeanImpl@874a85bf(/LibraryRefs[[CompoundKey: adf.oracle.domain]])"
at com.bea.staxb.runtime.internal.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:54)
at com.bea.staxb.runtime.internal.RuntimeBindingType$BeanRuntimeProperty.setValue(RuntimeBindingType.java:539)
at com.bea.staxb.runtime.internal.AttributeRuntimeBindingType$QNameRuntimeProperty.fillCollection(AttributeRuntimeBindingType.java:381)
at com.bea.staxb.runtime.internal.MultiIntermediary.getFinalValue(MultiIntermediary.java:52)
at com.bea.staxb.runtime.internal.AttributeRuntimeBindingType.getFinalObjectFromIntermediary(AttributeRuntimeBindingType.java:140)
at com.bea.staxb.runtime.internal.UnmarshalResult.unmarshalBindingType(UnmarshalResult.java:200)
at com.bea.staxb.runtime.internal.UnmarshalResult.unmarshalDocument(UnmarshalResult.java:169)
at com.bea.staxb.runtime.internal.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:65)
at weblogic.descriptor.internal.MarshallerFactory$1.createDescriptor(MarshallerFactory.java:150)
... 27 more
Caused by: weblogic.descriptor.BeanAlreadyExistsException: Bean already exists: "weblogic.j2ee.descriptor.wl.LibraryRefBeanImpl@874a85bf(/LibraryRefs[[CompoundKey: adf.oracle.domain]])"
at weblogic.descriptor.internal.ReferenceManager.registerBean(ReferenceManager.java:231)
at weblogic.j2ee.descriptor.wl.WeblogicApplicationBeanImpl.setLibraryRefs(WeblogicApplicationBeanImpl.java:1440)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at com.bea.staxb.runtime.internal.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:48)
... 35 more
.>
<9 Sep, 2014 8:56:39 PM IST> <Error> <Deployer> <BEA-149605> <Failed to create App/Comp mbeans for AppDeploymentMBean ADF_FUSION_POC. Error - weblogic.management.DeploymentException: Unmarshaller failed.
weblogic.management.DeploymentException: Unmarshaller failed
at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:193)
at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
Truncated. see log file for complete stacktrace
Caused By: weblogic.descriptor.BeanAlreadyExistsException: Bean already exists: "weblogic.j2ee.descriptor.wl.LibraryRefBeanImpl@874a85bf(/LibraryRefs[[CompoundKey: adf.oracle.domain]])"
at weblogic.descriptor.internal.ReferenceManager.registerBean(ReferenceManager.java:231)
at weblogic.j2ee.descriptor.wl.WeblogicApplicationBeanImpl.setLibraryRefs(WeblogicApplicationBeanImpl.java:1440)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
Truncated. see log file for complete stacktrace
>
<9 Sep, 2014 8:56:39 PM IST> <Error> <Deployer> <BEA-149265> <Failure occurred in the execution of deployment request with ID '1410276398843' for task '0'. Error is: 'weblogic.management.DeploymentException: Unmarshaller failed'
weblogic.management.DeploymentException: Unmarshaller failed
at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:193)
at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
Truncated. see log file for complete stacktrace
Caused By: weblogic.descriptor.BeanAlreadyExistsException: Bean already exists: "weblogic.j2ee.descriptor.wl.LibraryRefBeanImpl@874a85bf(/LibraryRefs[[CompoundKey: adf.oracle.domain]])"
at weblogic.descriptor.internal.ReferenceManager.registerBean(ReferenceManager.java:231)
at weblogic.j2ee.descriptor.wl.WeblogicApplicationBeanImpl.setLibraryRefs(WeblogicApplicationBeanImpl.java:1440)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
Truncated. see log file for complete stacktrace
>
<9 Sep, 2014 8:56:39 PM IST> <Warning> <Deployer> <BEA-149004> <Failures were detected while initiating deploy task for application 'ADF_FUSION_POC'.>
<9 Sep, 2014 8:56:39 PM IST> <Warning> <Deployer> <BEA-149078> <Stack trace for message 149004
weblogic.management.DeploymentException: Unmarshaller failed
at weblogic.application.internal.EarDeploymentFactory.findOrCreateComponentMBeans(EarDeploymentFactory.java:193)
at weblogic.application.internal.MBeanFactoryImpl.findOrCreateComponentMBeans(MBeanFactoryImpl.java:48)
at weblogic.application.internal.MBeanFactoryImpl.createComponentMBeans(MBeanFactoryImpl.java:110)
at weblogic.application.internal.MBeanFactoryImpl.initializeMBeans(MBeanFactoryImpl.java:76)
at weblogic.management.deploy.internal.MBeanConverter.createApplicationMBean(MBeanConverter.java:89)
Truncated. see log file for complete stacktrace
Caused By: weblogic.descriptor.BeanAlreadyExistsException: Bean already exists: "weblogic.j2ee.descriptor.wl.LibraryRefBeanImpl@874a85bf(/LibraryRefs[[CompoundKey: adf.oracle.domain]])"
at weblogic.descriptor.internal.ReferenceManager.registerBean(ReferenceManager.java:231)
at weblogic.j2ee.descriptor.wl.WeblogicApplicationBeanImpl.setLibraryRefs(WeblogicApplicationBeanImpl.java:1440)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
Truncated. see log file for complete stacktrace
>
#### Cannot run application ADF_FUSION_POC due to error deploying to IntegratedWebLogicServer.
[08:56:39 PM] #### Deployment incomplete. ####
[08:56:39 PM] Remote deployment failed (oracle.jdevimpl.deploy.common.Jsr88RemoteDeployer)
[Application ADF_FUSION_POC stopped and undeployed from Server Instance IntegratedWebLogicServer]
<Logger> <error> ServletContainerAdapter manager not initialized correctly.
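The BeanAlreadyExistsException in the log above usually means the migrated weblogic-application.xml ended up with two <library-ref> entries for the same shared library. A sketch of what to look for in the descriptor (the duplicate entry would need to be removed; element names per the WebLogic deployment descriptor schema):

```xml
<!-- weblogic-application.xml: a duplicate library-ref produces
     BeanAlreadyExistsException for CompoundKey: adf.oracle.domain -->
<weblogic-application xmlns="http://xmlns.oracle.com/weblogic/weblogic-application">
  <library-ref>
    <library-name>adf.oracle.domain</library-name>
  </library-ref>
  <!-- duplicate entry: remove this one -->
  <library-ref>
    <library-name>adf.oracle.domain</library-name>
  </library-ref>
</weblogic-application>
```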
How do I solve that error? Thanks.
Hi Shakif,
Ideally it should work, and you should be able to see it in the Changelist of the target system.
Could you tell me the exact error?
Thanks
Satish -
Shared nothing live migration wrong network
Hey,
I have a problem when doing shared nothing live migration between clusters using SCVMM 2012 R2.
The transfer completes fine, but it chooses to use my team of 2 x 1 Gb NICs (routable) instead of the team with 2 x 10 Gb (non-routable, but open between all hosts).
I have set the migration subnet (vmmigrationsubnet) to the correct network, checked the live migration settings, and verified that port 6600 is listening on the correct IP.
But it still chooses to use the 1 Gb network.
Does anyone have any idea what I can try next?
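For reference, the migration subnet can be inspected and set per host from the VMM PowerShell module (a sketch; the host name and subnet are placeholders):

```powershell
# Inspect the current migration subnet(s) configured on a host
Get-SCVMHost -ComputerName "hyperv01" | Select-Object Name, MigrationSubnet

# Pin migrations to the 10 Gb network's subnet
Set-SCVMHost -VMHost (Get-SCVMHost -ComputerName "hyperv01") -MigrationSubnet @("10.10.10.0/24")
```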
// Johan Runesson
Do you have only your live migration network defined as such on the clusters, or do you have both defined? What are the IP networks of the live migration networks on each cluster?
.:|:.:|:. tim -
Now that SSDT-BI for 2014 has been out for a while, is it possible for a single shared data source to be referenced/used across SSRS projects? If not, why hasn't this functionality been added yet? With all the power and functionality of Visual
Studio 2013, TFS, and the .NET Framework, I still find myself developing with one hand tied behind my back when developing for SSRS.
The laundry list at this point includes:
- There is no way for projects to reference one another to share common functionality
- The folder structure is flat, which just causes clutter once a project grows past about 15 files
- The core rs.exe deployment logic requires VB.NET, while the majority of .NET development is done in C#
- The version of .NET supported through rs.exe doesn't appear to be any greater than 2.0
- The web server/services are still running on the .NET 3.5 Framework
- When deploying through SSDT, there is no way to indicate whether directories should be hidden, or to add descriptions to reports
Hi Christopher,
Just as you said, a shared data source cannot be used across projects in SQL Server Data Tools. It is a set of data source connection properties that can be referenced by multiple reports, models, and data-driven subscriptions that run on a Reporting Services
report server.
And when the rs.exe utility processes it, the script must be written in Visual Basic .NET and stored in a Unicode or UTF-8 text file with an .rss file name extension. The other features you list are still not updated.
Personally, I understand your feelings and how frustrating this issue is. It would be my pleasure to help reflect your recommendation to the proper department for their consideration. Please feel free to submit your suggestion at the following link:
https://connect.microsoft.com/SQLServer/Feedback. Your feedback is valuable for us to improve our products and increase the level of service provided.
Thank you for your understanding.
Regards,
Katherine Xiong
TechNet Community Support -
Why do shared datasources have to reside with the project?
Background:
SQL SERVER 2008 r2
Visual Studio 2008
I inherited a mess that I'm trying to clean up. We have 8 projects for a single application, and each of these projects has its own solution. I'm trying to create a single solution that holds all 8 projects, with each project in its own folder.
I'm able to add all 8 projects to a single solution in Visual Studio, but the problem I'm having is with shared data sources and datasets. Every one of these projects uses a single data source pointing to the exact same database. When I
add the shared data source to a project, it gets copied into the project folder, so I end up with the exact same data source in 8 different folders. The same holds true for the datasets.
The report server has a separate folder for the shared data sources and another for the shared datasets; the reports are in their own folders. How can I replicate this hierarchy in Visual Studio?
Hi RayMajewski,
Per my understanding, you have 8 projects and you want to move them under a single solution, but you now have an issue with the shared data sources and datasets, right?
As you know, by default when you add a new project there are three default folders under the project. If each of the 8 projects has its own shared data sources and shared datasets, then when they are moved into one solution, the shared data sources and
shared datasets stay together under their project.
Please refer to the details below to make sure you have moved the projects correctly:
1. Copy the 8 projects to the solution folder on the local disk, e.g. .\Visual Studio 2008\Projects\Solution2.
2. Use File > Open > Solution/Project to add these projects to the solution one by one.
3. You will find the data sources and datasets kept together under each project.
If what you want is to replicate this hierarchy in Visual Studio outside the projects, we currently couldn't find any good method to do this; keeping them outside the current project is not supported. If all 8 projects point to the same database,
you can just copy the shared data source from one project to another to create the same shared data source in the other project.
If you still have any problem, please feel free to ask.
Regards,
Vicky Liu
TechNet Community Support -
Win 2012 shared-nothing live migration slow performance
Hello,
I have two standalone Hyper-V servers: A) 2012 and B) 2012 R2.
I tried a live migration over non-shared storage between them. The functionality is alright, but performance is low: the copying takes a long time at a maximum of 200 Mbps.
I am not able to find the bottleneck. Network and disk performance seem to be good. It is a 1 Gbps network, and when I tried a simple copy/paste of the VHD file from A to B over the CIFS protocol, the speed was almost the full 1 Gbps; it was nice and fast.
Is this by design? I am not able to reach full network performance with a shared-nothing live migration.
Thank you for your reply.
Sincerely,
Peter Weiss

Hi,
I haven't found a similar issue with Hyper-V. Do both of your hosts have chipsets from the same family? Could you try switching between the three Live Migration performance
options and then monitor again? There may also be a disk or file system performance issue, so please try updating your RAID card firmware to the latest version.
More information:
Shared Nothing Live Migration: Goodbye Shared Storage?
http://blogs.technet.com/b/canitpro/archive/2013/01/03/shared-nothing-live-migration-goodbye-shared-storage.aspx
How to: Copy very large files across a slow or unreliable network
http://blogs.msdn.com/b/granth/archive/2010/05/10/how-to-copy-very-large-files-across-a-slow-or-unreliable-network.aspx
The similar thread:
How to determine when to use the xcopy with the /j parameter?
http://social.technet.microsoft.com/Forums/en-US/5ebfc25a-41c8-4d82-a2a6-d0f15d298e90/how-to-determine-when-to-use-the-xcopy-with-the-j-parameter?forum=winserverfiles
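The xcopy /j thread above is about copying very large files in big sequential chunks so the file cache isn't flooded. A rough, portable sketch of the same idea in Python (the VHD paths in the example are hypothetical, and this approximates rather than truly bypasses OS buffering, which would need platform-specific flags such as O_DIRECT):

```python
from pathlib import Path

CHUNK_SIZE = 4 * 1024 * 1024  # 4 MiB sequential reads suit large VHD/VHDX files

def copy_large_file(src: Path, dst: Path) -> int:
    """Copy src to dst in large sequential chunks; returns bytes copied."""
    copied = 0
    with src.open("rb") as fin, dst.open("wb") as fout:
        while True:
            chunk = fin.read(CHUNK_SIZE)
            if not chunk:
                break
            fout.write(chunk)
            copied += len(chunk)
    return copied

# Example (hypothetical paths):
# copy_large_file(Path(r"D:\VMs\vm01.vhdx"), Path(r"\\hostB\vms\vm01.vhdx"))
```

Timing such a copy against the same share is a quick way to tell whether the bottleneck is the network path or the live migration process itself.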
Hope this helps -
Database Keeps On Switching During Migration
Hello. I am currently doing a mailbox migration from Exchange 2007 to Exchange 2013. I have 4 databases in Exchange 2013 which are A, B, C and D (each with different quotas). The databases have their copies on another Exchange server under DAG.
During the migration, database D suddenly keeps activating its copies on different servers every 5 minutes or so. In the first minute it was active on server 1; after a while it activated on server 2, then went back to server 1. This has repeated itself until
now.
This only happens with database D. Migration to database D keeps failing because it cannot find the target database (since it keeps switching). Stopping and resuming is useless because it will fail again. Migration to databases A, B, and C works fine.
Does anyone have any idea what's going on?

Hi,
Glad to hear the good news. Thanks for your update and sharing.
Best regards,
Belinda Ma
TechNet Community Support