Moving SHP2010 from SQL2014 to SQL2012 server

So I have the following problem: my DBAs upgraded the test SQL machines to SQL 2014, including the one hosting the development and testing SharePoint 2010 farm. Normally this might not be a big deal, but we have invested heavily in a BI solution using
both native SQL 2012 Reporting Services and SharePoint-integrated Reporting Services. After the upgrade, the SharePoint RS stopped working since the backend SQL was now SQL 2014, and after upgrading the SharePoint RS to SQL 2014, the native RS stopped working because of conflicting assemblies.
This, and the fact that the production environment is SQL 2012, makes me want to move the SharePoint 2010 DEV & TEST environment to a new SQL 2012 server. Sounds easy, right?
Well, no matter what method I try to move the SQL 2014 databases to the SQL 2012 server, it always fails. Detach/attach, database backup, generating scripts, exporting the database: everything fails for one reason or another. Either the database is too big to dump to file
("out of memory") or the table structures won't let me use the database (I get a lot of version columns that are "calculated"); nothing works. And yes, the DBs are in compatibility mode 2012. DPM won't restore it to SQL 2012, and I can't
invest tons of money in a third-party solution like DocAve to perform a one-time farm backup/restore.
So my last hope of using "built-in" functions for this is backup-spfarm & restore-spfarm. I've read Microsoft's articles about them, and they clearly state that these shouldn't be used to move the CONFIGURATION & CENTRAL ADMIN databases.
According to blog posts, I have to disconnect from the farm, then create a new farm and do a restore-spfarm?
Does anyone have experience doing this? Can I restore and overwrite everything, including configuration, afterwards? The farm is configured very specifically with BI in mind, with services running under specific accounts with SPNs and everything, so I would
really rather not redo everything! Or does anyone have any other way of moving a compatibility-level-2012 database from a SQL 2014 server to a SQL 2012 server that I haven't thought of?
Regards // Kristoffer
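For what it's worth, the backup-spfarm / restore-spfarm route the blog posts describe can be sketched roughly like this (a hedged sketch only: the server names, share paths, and database names are placeholders, and New-SPConfigurationDatabase takes further required parameters, e.g. -FarmCredentials and -AdministrationContentDatabaseName, omitted here for brevity):

```powershell
# On the existing farm (SQL 2014 backend): take a full farm backup.
Backup-SPFarm -Directory \\fileserver\spbackup -BackupMethod Full

# On the rebuilt server: create a NEW farm against the SQL 2012 instance.
# Restore-SPFarm cannot restore the configuration or Central Admin databases,
# so they are recreated here rather than restored.
New-SPConfigurationDatabase -DatabaseName SharePoint_Config `
    -DatabaseServer SQL2012SRV -Passphrase (Read-Host -AsSecureString)

# Restore content databases and service applications into the new farm.
Restore-SPFarm -Directory \\fileserver\spbackup -RestoreMethod New
```

Note that because the configuration database is rebuilt rather than restored, farm-level settings such as the managed accounts, SPNs, and RS integration would still have to be reconfigured by hand afterwards.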


Similar Messages

  • Help needed with moving mail from SuSE to X Server

    I just bought a Mac Mini (used) and OS X Server 10.4. I installed the software, set up my mail server, and can send and receive. I shut down the OS X mail server, copied my /var/spool/imap/user/tom from my SuSE Linux server to my OS X server (both are running Cyrus), ran chown cyrusimap:mail on all the files, then ran reconstruct via Server Admin. While the emails showed up in my email client, they didn't display correctly, nor could I see what was inside them.
    What did I do wrong? I'm sure a lot of things.
    I had my office mailserver crash a couple months ago and moved my backup to my new SuSE server and got everything working, but I can't remember how I did it....panic modes cause memory loss.
    Can anyone suggest a way to move email over?
    Additionally, after I moved the email over, I wasn't able to get into Squirrelmail, but was able to access the mailbox via mail client. Squirrelmail reported wrong user or password.
    Thanks,
    Tom

    If I understood correctly, you only moved the spool/partition, but not the Cyrus database.
    Chances are your imap/cyrus configuration was different on the SuSE box. Things like BerkeleyDB vs. skiplist, etc.
    A few assumptions:
    -You didn't move over imapd.conf and cyrus.conf (which is best not to)
    -You created a mail enabled user on the Tiger box with the first shortname being tom
    If my assumptions are correct, then the first thing I would try is a complete reconstruction, not just re-indexing. The simplest way, would be to download mailbfr, install it and issue:
    mailbfr -o
    mailbfr -f
    mailbfr is available here: http://osx.topicdesk.com/content/category/4/17/80/
    Just FYI, you will lose your read flags.
    Alternatively, keep the SUSE box running and use an IMAP client or script to copy mail over to the Tiger Server.
    HTH,
    Alex

  • Migration of AlwaysON database from WSFC 2008 + SQL2012 to WSFC 2012 + SQL2014

    Hello All,
    I have a 6 node GEO cluster, 4 nodes in primary site + 2 nodes in secondary site. In the primary node 03 and 04 has 2 pair of SQL instances default + named.
    I am planning to move the named SQL pair from WSFC 2008 + SQL2012 to WSFC 2012 + SQL2014. It has only one AG group with 4 TB of DBs.
    Do you have any experience and options for this migration? I am planning to go with backup/restore. Please share your inputs.
    1. Backup/restore 2. Cross-cluster migration 3. Detach/attach 4. SAN migration
    Thanks!
    Muthukkumaran Kaliyamoorthy
    Helping SQL DBAs and Developers >>> SqlserverBlogForum

    Since WSFC does not support rolling upgrades, you will need to have another WSFC running Windows Server 2012/R2 (R2 is highly recommended due to the features available for maintaining high availability) and do a cross-cluster migration. Refer to this documentation
    for more details.
    https://msdn.microsoft.com/en-us/library/jj873730.aspx
    This requires that you're fairly comfortable with PowerShell.
    The main challenge here is dealing with hardware resources since you will need additional servers when moving into the new WSFC. You would have to run multiple instances on the same cluster node while you remove a node and repurpose it for WSFC 2012/R2.
    This is where proper planning is very important to make sure you are maximizing your resources while minimizing downtime. I cannot speak about the capabilities of your SAN as each vendor will have different features but my approach has always been dependent
    on SLAs and recovery objectives. Backup/restore will still be needed to synchronize your data from the old WSFC to the target WSFC.
    Edwin Sarmiento SQL Server MVP | Microsoft Certified Master
    Blog |
    Twitter | LinkedIn
    SQL Server High Availability and Disaster Recovery Deep Dive Course
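The backup/restore step used to synchronize data between the old and new WSFC can be sketched with the SQLPS cmdlets (instance names, database name, and share path below are placeholders; plain T-SQL BACKUP/RESTORE works equally well):

```powershell
# Full backup on the old WSFC 2008 / SQL 2012 replica.
Backup-SqlDatabase -ServerInstance "OLDNODE\INST1" -Database "AppDB" `
    -BackupFile "\\fileshare\mig\AppDB.bak"

# Restore WITH NORECOVERY on the new WSFC 2012 / SQL 2014 node, so that
# subsequent log backups can be applied right up until cutover.
Restore-SqlDatabase -ServerInstance "NEWNODE\INST1" -Database "AppDB" `
    -BackupFile "\\fileshare\mig\AppDB.bak" -NoRecovery
```

Keeping the new side in NORECOVERY and shipping log backups minimizes the final downtime window to the last log restore plus the connection-string switch.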

  • Does moving SP code from DB to App Server help scale an application?

    We have an application developed using Oracle 10g Forms and a 10g DB. All our processing is done using SPs, so they all run on the DB server. Even our inserts/updates/deletes to a table are handled by SPs.
    The site with the maximum simultaneous (i.e. concurrent) users is one with 100 concurrent users.
    We have a prospective customer whose requirement is 300 concurrent users. Our application won't be able to handle it, since the DB server is a single-processor server with limited memory.
    One suggestion was to move the SPs to the App Server by moving them into the Form. Since OAS has a PL/SQL engine, they will run on the App Server and hence remove the workload from the DB.
    I don't buy this. My point is, even if the SPs are moved to the app server, the SQL will still run on the DB server, right?
    So what is the advantage?

    Christian, I just modified the original post, thinking nobody would reply since it was very long. Thanks a lot for the reply. For others, and for myself, here is my original question.
    I have a problem like this: Take this scenario. We have a TELCO app. It is an E-Business Web Application (i.e. Dynamic Web Site) developed using ASP.Net/C#. App. Server is IAS and DB is Oracle 10g. IAS and the DB reside in 2 servers. Both are single processor servers.
    The maximum simultaneous user load is 500. i.e. 500 users can be working in the system at one time.
    Now suppose 500 users login at the same time and perform 500 different operations (i.e. querying, inserts, updates, deletes). Now all 500 operations will go to the App Server. From there the C# code will perform everything using Oracle stored procedures (SP). I.e. we first make a connection to the DB, SP is invoked by passing parameters, it will perform the operation in the DB, send the output to the App. Server C# code and we will close the Oracle connection (in App Server. C# code).
    Now, the 500 operations will obviously have to wait in a queue and the SQLs will be processed in the DB server.
    Now, question is how does CONNECTION POOLING help in this situation?
    I have been told that the above method of using DB SPs to perform processing will make the whole system very slow, since all the processing load has to be borne by the DB server, and since DB operations involve disk I/O it will be very slow. They say you cannot SCALE the application with this DB-processing mode and you have to move to app-server processing mode in order to scale your application. I.e., if the number of users increases to 1000, our application won't be able to handle it and will get very slow.
    What they suggest is to move all the processing to the App. Server (i.e. App. Svr. Memory). They also say that CONNECTION POOLING can help even further to improve the performance.
    I have some issues with this. First of all to get all the data to the App server memory for each user process from the DB will not only require disk I/O, it will also involve a network trip. That will obviously take time. Again the DB requests to fetch the data will have to wait in the DB queue. Then we do the processing in the App. Server memory and then we have to again write to the DB server which again will require a network trip and disk I/O. According to my thinking won’t this take MORE TIME than doing it in the DB server??
    Also, how can CONNECTION POOLING help? In C# we OPEN a connection only just before running the SP or getting the data, and we close the connection just after the operation is over.
    I don't see how CONNECTION POOLING can improve anything.
    I also don't see how moving data into the App Server from the DB Server can improve performance. I think it will only decrease performance.
    Am I right or have I missed something?
    Edited by: user12240205 on Nov 17, 2010 2:04 AM
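On the connection pooling point: ADO.NET pools connections per connection string by default, so the open-just-before/close-just-after pattern described above is exactly what pooling is designed for. Close() returns the physical connection to the pool instead of tearing down the TCP session and re-authenticating on every call. A minimal sketch (the connection string values are made up for illustration):

```powershell
# Load ADO.NET and open/close against a pooled connection string.
Add-Type -AssemblyName System.Data
$connStr = "Data Source=DBSRV;Initial Catalog=AppDb;Integrated Security=True;" +
           "Min Pool Size=10;Max Pool Size=200;Pooling=True"
$conn = New-Object System.Data.SqlClient.SqlConnection $connStr
$conn.Open()    # handed an idle pooled physical connection if one exists
# ... execute the stored procedure here ...
$conn.Close()   # returned to the pool, not physically closed
```

So pooling does not move any SQL work off the database server; it only removes the per-call cost of establishing and tearing down connections, which is a different bottleneck from the stored-procedure CPU and I/O load being debated here.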

  • Help with Moving Emails from Exchange Server 2013

    Hi Team,
    Help, I need a solution on how to copy incoming and outgoing emails together with their attachments and was hoping someone could help with a solution I need to find for this problem.
    1) I have a need to copy all incoming and outgoing emails & attachments from Exchange Server 2013 mailboxes on a daily basis, so they can be archived into an external 3rd party database overnight.
    I believe that the first step is that I can set up another Exchange mailbox, through Journaling to receive these emails. 
    Is there any way that these emails can then be moved directly from the journaling mailbox that I have created to a shared folder on the network? If so, what format would they be sent out as (e.g. .msg, .eml), and what is the process?
    Failing that, could they be sent to an Outlook client (not another Exchange mailbox) on the network and stored in a .pst file? (I have a program that will export them from .pst)
    I would really appreciate any assistance that you can provide in this matter.
    Regards, Greg.

    Hi,
    You can set auto-forward on the journaling mailbox and auto-forward emails to another mailbox. But why not directly archive the emails from the journaling mailbox into the "external 3rd party database"?
    Thanks,
    Simon Wu
    TechNet Community Support
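If the .pst route mentioned in the question is preferred, Exchange 2013 can also export the journaling mailbox server-side with a mailbox export request. A hedged sketch (the mailbox name, admin account, and share path are placeholders; the account running this needs the Mailbox Import Export role, and the share must be writable by the Exchange Trusted Subsystem group):

```powershell
# Grant the export role once (replace the user name with your admin account).
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User "admin"

# Export the journaling mailbox to a .pst on a network share.
New-MailboxExportRequest -Mailbox "JournalMBX" `
    -FilePath "\\fileserver\archive\journal.pst"

# Check progress of the export.
Get-MailboxExportRequest | Get-MailboxExportRequestStatistics
```

That removes the need for a separate Outlook client on the network just to accumulate a .pst file.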

  • Moving table from oracle 10g to sql server 2005 plz help!!!

    Hi All,
    I have a table in Oracle that I have to move to SQL Server.
    I have exported the table, and the dmp file is around 150 MB.
    I do not know how to import this file into SQL Server.
    Can someone kindly advise me as to the best way to do this?
    I know the best people to answer this would be the SQL Server techs, but I just wanted to try my luck on OTN.
    regards,

    Hello,
    you could use the Database Gateway for MS SQL Server, create a database link that uses the gateway, and then transfer the data from Oracle to SQL Server using a command like the following in SQL*Plus:
    copy from scott/tiger@ora102 -
    insert TEST1@dg4msql -
    using select * from test1@ora102 ;
    Another solution is using a PL/SQL block, and how to do it is described in the following note in My Oracle Support:
    "Insert Into Remote Table Select * From Local Table" Gives Error ORA-02025 Using DG4MSQL (Doc ID 790572.1)
    I don't know whether it is the "best way to do it", but it is an alternative.
    For inserting a flat file into SQL Server you really need to check with Microsoft. Or you can use 3rd party software: http://www.dbload.com/
    Best regards
    Wolfgang

  • Very urgent-moving ora9i db from win2k to win2003 server

    hi
    I need a step-by-step solution to move an Oracle 9i database from Win2K to a Win2003 server.
    Thank you in advance to whoever helps me at the right time.

    Steps:
    1. Install the Oracle software on Windows 2003.
    2. Shut down the 9i database.
    3. Copy all files from the old database to the Windows 2003 server.
    4. Use "oradim" to create an instance.
    5. Change the locations of datafiles/controlfiles/logfiles if needed.
    6. Start up your 9i DB on Windows 2003.
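Step 4 above, for illustration (the SID and pfile path are placeholders; oradim is run from the Oracle home on the new Windows 2003 server):

```powershell
# Create the Windows service and instance for the copied database.
oradim -NEW -SID ORCL -STARTMODE auto -PFILE C:\oracle\admin\ORCL\pfile\init.ora
```

If the datafile locations changed in step 5, the control files must be updated accordingly (e.g. via ALTER DATABASE RENAME FILE, or by recreating the control file) before the database will open.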

  • Since upgrading to Mavericks, my Canon MP980 printer no longer works. I've tried deleting the printer and adding it again. the process ends with my mac saying the software is not currently available from the software update server.

    Since upgrading to Mavericks, my Canon MP980 printer no longer works. I've tried deleting the printer and adding it again. I've downloaded the driver software from Canon. My Mac dropbox tells me "the selected printer software is available from Apple", but when I click to download it, the process ends with my mac saying the software is not currently available from the software update server.

    Open Finder and click on Go > Go to Folder. Then type
    /Library/Printers/Canon
    and press Enter. This will show the BJPrinter and iJScanner folders. You could try moving or trashing this folder and then installing the driver that you downloaded from Canon again.

  • Moving personalization from one instance to another.

    Hi All,
    Could anyone guide me through the steps of moving personalizations from one instance to another?
    I tried this syntax:
    FNDLOAD <userid>/<password> 0 Y DOWNLOAD $FND_TOP/patch/115/import/affrmcus.lct <filename.ldt> FND_FORM_CUSTOM_RULES form_name=<form name>
    What could the filename.ldt location be?
    Should I execute this from PuTTY on the Unix box?
    Regards, Ashish
    Edited by: Ashish on Aug 30, 2010 9:59 AM

    Ashish,
    "I am following that document, but it does not tell me where I should execute the script. Please help if you could."
    Run the command on the server as the applmgr user, after sourcing the application env file. You can run the command anywhere, provided that the directory is writable by the applmgr user.
    Thanks,
    Hussein

  • Error moving mailbox from 2003 Exchange to 2010 Exchange

    Hi everyone
    I am having some trouble moving a mailbox from our Exchange 2003 server over to our Exchange 2010 server. At this point I have moved 300 users, but this one is causing some grief. It stops at 10%. Any ideas?
    23/06/2012 21:35:28 [VMPRIEXCH] 'ad.DNSArrow.co.uk/Admin Accounts/RJ Admin' created move request.
    23/06/2012 21:35:28 [VMPRIEXCH] 'ad.DNSArrow.co.uk/Admin Accounts/RJ Admin' allowed a large amount of data loss when moving the mailbox (250 bad items).
    23/06/2012 21:35:30 [VMPRIEXCH] The Microsoft Exchange Mailbox Replication service 'VMPRIEXCH.ad.DNSArrow.co.uk' (14.2.247.1 caps:07) is examining the request.
    23/06/2012 21:35:30 [VMPRIEXCH] Connected to target mailbox 'Primary (c8e0ea6b-eec7-4da0-8ea5-e031a91cc984)', database 'SHAREDMBDB', Mailbox server 'VMPRIEXCH.ad.DNSArrow.co.uk' Version 14.2 (Build 247.0).
    23/06/2012 21:35:30 [VMPRIEXCH] Connected to source mailbox 'Primary (c8e0ea6b-eec7-4da0-8ea5-e031a91cc984)', database 'EX01\SG2\Mailbox Store SG2 C', Mailbox server 'ex01.ad.DNSArrow.co.uk' Version 6.0 (Build 7654.0).
    23/06/2012 21:35:41 [VMPRIEXCH] Request processing started.
    23/06/2012 21:35:41 [VMPRIEXCH] Mailbox signature will not be preserved for mailbox 'Primary (c8e0ea6b-eec7-4da0-8ea5-e031a91cc984)'. Outlook clients will need to restart to access the moved mailbox.
    23/06/2012 21:35:41 [VMPRIEXCH] Source mailbox information before the move:
    Regular Items: 189917, 5.272 GB (5,660,371,162 bytes)
    Regular Deleted Items: 2491, 135 MB (141,547,842 bytes)
    FAI Items: 2490, 0 B (0 bytes)
    FAI Deleted Items: 0, 972 B (972 bytes)
    23/06/2012 21:35:49 [VMPRIEXCH] Fatal error MapiExceptionNotFound has occurred.
    Error details: MapiExceptionNotFound: Unable to GetSearchCriteria. (hr=0x8004010f, ec=-2147221233)
    Diagnostic context:
        Lid: 45095   EMSMDB.EcDoRpcExt2 called [length=48]
        Lid: 61479   EMSMDB.EcDoRpcExt2 returned [ec=0x0][length=98][latency=0]
        Lid: 23226   --- ROP Parse Start ---
        Lid: 27962   ROP: ropGetSearchCriteria [49]
        Lid: 31418   --- ROP Parse Done ---
        Lid: 45095   EMSMDB.EcDoRpcExt2 called [length=53]
        Lid: 61479   EMSMDB.EcDoRpcExt2 returned [ec=0x0][length=48][latency=15]
        Lid: 23226   --- ROP Parse Start ---
        Lid: 27962   ROP: ropLtidFromId [67]
        Lid: 17082   ROP Error: 0x8004010F
        Lid: 17505  
        Lid: 21921   StoreEc: 0x8004010F
        Lid: 31418   --- ROP Parse Done ---
        Lid: 22753  
        Lid: 21817   ROP Failure: 0x8004010F
        Lid: 30894  
        Lid: 24750   StoreEc: 0x8004010F
        Lid: 29358  
        Lid: 27950   StoreEc: 0x8004010F
        Lid: 29310  
        Lid: 23998   StoreEc: 0x8004010F
        Lid: 29329  
        Lid: 19729   StoreEc: 0x8004010F
        Lid: 23185  
        Lid: 25233   StoreEc: 0x8004010F
       at Microsoft.Mapi.MapiExceptionHelper.ThrowIfError(String message, Int32 hresult, SafeExInterfaceHandle iUnknown, Exception innerException)
       at Microsoft.Mapi.MapiContainer.GetSearchCriteria(Restriction& restriction, Byte[][]& entryIds, SearchState& state)
       at Microsoft.Exchange.MailboxReplicationService.LocalFolder.Microsoft.Exchange.MailboxReplicationService.IFolder.GetSearchCriteria(RestrictionData& restriction, Byte[][]& entryIds, SearchState& state)
       at Microsoft.Exchange.MailboxReplicationService.FolderWrapper.<>c__DisplayClass19.<Microsoft.Exchange.MailboxReplicationService.IFolder.GetSearchCriteria>b__18()
       at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(GenericCallDelegate operation)
       at Microsoft.Exchange.MailboxReplicationService.FolderWrapper.Microsoft.Exchange.MailboxReplicationService.IFolder.GetSearchCriteria(RestrictionData& restriction, Byte[][]& entryIds, SearchState& state)
       at Microsoft.Exchange.MailboxReplicationService.FolderRecWrapper.EnsureDataLoaded(IFolder folder, FolderRecDataFlags dataToLoad, ReportBadItemsDelegate reportBadItemsDelegate)
       at Microsoft.Exchange.MailboxReplicationService.MailboxWrapper.<>c__DisplayClass4`1.<LoadFolders>b__0()
       at Microsoft.Exchange.MailboxReplicationService.ExecutionContext.Execute(GenericCallDelegate operation)
       at Microsoft.Exchange.MailboxReplicationService.MailboxWrapper.LoadFolders[TFolderRec](FolderRecDataFlags dataToLoad, PropTag[] additionalPtags, GenericCallDelegate abortDelegate, ReportBadItemsDelegate reportBadItemsDelegate)
       at Microsoft.Exchange.MailboxReplicationService.MailboxWrapper.GetFolderMap[TFolderRec](FolderRecDataFlags dataToLoad, PropTag[] additionalPtags, GenericCallDelegate abortDelegate, ReportBadItemsDelegate reportBadItemsDelegate)
       at Microsoft.Exchange.MailboxReplicationService.MailboxCopierBase.GetSourceFolderMap(GetFolderMapFlags flags, FolderRecDataFlags dataToLoad, GenericCallDelegate abortDelegate)
       at Microsoft.Exchange.MailboxReplicationService.MoveBaseJob.<CreateFolderHierarchy>b__2d(MailboxMover mbxCtx)
       at Microsoft.Exchange.MailboxReplicationService.MoveBaseJob.ForeachMailboxContext(MailboxMoverDelegate del)
       at Microsoft.Exchange.MailboxReplicationService.MoveBaseJob.CreateFolderHierarchy(Object[] wiParams)
       at Microsoft.Exchange.MailboxReplicationService.CommonUtils.CatchKnownExceptions(GenericCallDelegate del, FailureDelegate failureDelegate)
    Error context: --------
    Operation: IFolder.GetSearchCriteria
    OperationSide: Source
    Primary (c8e0ea6b-eec7-4da0-8ea5-e031a91cc984)
    Search folder: 'MS-OLK-BGPooledSearchFolder28E6C24EFE39F845A1C7309FF139D54F', entryId [len=46, data=00000000ED492DE46CCB2941A6EFC600B0DC8ABD010062EB1CE91779304984F59E79C254E5430000033BC7BF0000], parentId [len=46, data=00000000ED492DE46CCB2941A6EFC600B0DC8ABD010062EB1CE91779304984F59E79C254E54300000015852B0000]
    23/06/2012 21:35:50 [VMPRIEXCH] Removing target mailbox 'Primary (c8e0ea6b-eec7-4da0-8ea5-e031a91cc984)' due to an offline move failure.
    23/06/2012 21:35:50 [VMPRIEXCH] Relinquishing job.
    Regards
    Ronnie
    Ronnie Jorgensen | MCTS Windows Server 2008

    Hi,
    Maybe we can configure BadItemLimit for it.
    New-MoveRequest:
    http://technet.microsoft.com/en-us/library/dd351123.aspx
    More information for your reference.
    Cannot move Exchange 2003 mailboxes to Exchange 2010:
    http://social.technet.microsoft.com/Forums/en-US/exchangesvrmigration/thread/89902f00-4b84-4f10-b909-121a81241c85/
    Wendy Liu
    TechNet Community Support
    Hi Wendy
    BadItemLimit is already set to 250 items, with large data loss accepted:
    New-MoveRequest -Identity 'vquotes' -TargetDatabase "DB06" -BadItemLimit 250 -AcceptLargeDataLoss
    So that is not the problem :)
    Ronnie Jorgensen | MCTS Windows Server 2008

  • Moving Away From Auto Modes

    Moving Away From Auto-Exposure   
    Exposure modes
    Auto-exposure mode on your camera does produce generally good photographs, but taking full advantage of the more advanced capabilities of your digital camera will provide even better results. Most modern cameras have a number of preset exposure modes, and some more advanced cameras (especially DSLRs) have two semi-manual exposure modes and full manual exposure control.
      Portrait mode
    The most common preset is the portrait setting. This mode should keep the flash available at all times in case it is needed for correct exposure, and it isolates the subject so that the background and foreground are out of focus and only the subject is sharp.
    Action mode
    Another common preset is the action setting. As the name implies this is a great setting to use if you are photographing sports, or any subject that is moving fast and you want to stop its action.
    Landscape mode
    Landscape preset is also a common preset mode on modern cameras. This mode should have the flash off since the subject is most often outside and well lit, and this mode should also keep detail in the foreground and especially the background in focus.
    Macro mode
    For those of you that like taking images of small subjects (i.e. flowers) most digital cameras also have a preset called macro mode. Macro mode should have the flash on at all times since at higher magnifications even the slightest movement of the subject will blur the image. The flash not only effectively stops the movement of the subject but also evenly illuminates the subject.
    Night mode
    Night mode, as the name implies, is a preset some cameras have for taking photographs at night. This mode is generally the least useful preset, because nighttime photography is perhaps the most challenging photography there is. It turns the flash off and also turns on what's called noise reduction, a setting that helps eliminate the digital noise caused by low light levels on digital camera sensors.
    Semi-manual modes
    Most DSLRs and some more advanced point-and-shoot cameras have two semi-manual modes and a full manual mode. The semi-manual modes are aperture priority, which generally controls what is in focus by adjusting the aperture (the size of the opening in the lens), and shutter priority, which controls how long the shutter is open and thus whether motion is blurred or frozen. The last mode is full manual, which lets the user control both the shutter and the aperture to get the correct exposure, allowing the greatest creative control over your images but requiring the most expertise.
    Moving Away From Auto-Flash   
    Turn off your Auto-Flash
    Most digital cameras, while set on auto-flash (especially point-and-shoot cameras), don't allow the user to choose when the flash is on or off. This will usually produce adequately exposed images, but just because your subject is correctly exposed doesn't mean it's lit how it should be. Controlling how you light your subject will make the difference between good images and great ones. To do this you should definitely consider moving away from auto-flash.
    Fill Flash
    When shooting outdoors the camera might decide that there is enough light and not fire the flash, but if you're shooting a subject that is backlit or top-lit, a small amount of flash will light up your subject and provide superior results (this is called fill flash).
    Red eye reduction
    Another flash setting that most cameras provide is red-eye reduction. This setting fires a series of flashes at your subject to contract their pupils, so the flash doesn't reflect red from their retinas back to the sensor. This mode works fairly well, but due to the time needed for the pre-flashes, your subject often will have changed by the time the image is actually taken.
    Allan
    Community Connector
    Best Buy® Corporate
    Allan|Senior Social Media Specialist | Best Buy® Corporate
     Private Message

    Hey Gabriel,
    You don't have to feel sorry.
    I'm sorry that you feel sorry to people who dislike your products.
    Could you tell me how to precompile JSP files and deploy? And, I would like to turn off JSP compilation functionality on live servers completely.
    What's the benefit of using the database PM? I'm aware of the TarPM bug where it deletes data tar files during TarPM optimization and index merge. Does the database PM have the same problem?
    If I use the database PM, can I be free from regular maintenance such as datastore garbage collection, TarPM optimization, etc.? I find it very difficult to run such maintenance scripts on a live server where the repository is heavily utilized 24/7.
    I wouldn't mind shutting down the server and running that maintenance offline. Actually, it would be much better if I could run the maintenance offline.
    Also, I heard that there is a bug in share-nothing clustering for CQ5.4. I'm not sure if this is a Jackrabbit bug. If CQ instances are all clustered and I can take one out at a time for offline maintenance, that would be superb.
    And what about backups and restoration? Why isn't there a set of maintenance scripts? There are documents with script snippets in the knowledgebase... but why aren't there official maintenance tools?
    And where is a book? There are documents that explain what the dispatcher is, what the repository is, how to develop custom components, how to configure repositories, etc. But why isn't there a book that covers architecture for a production site, application development practices, and maintenance know-how?
    Does this all mean that Adobe does not care for CQ and will not pursue CQ as a viable technology stack for ADEP WEM? (Which would be excellent news, by the way.)
    Or is it Adobe's scheme to monetize support (fewer publicly available documents/books, more support tickets)?

  • Moving Mailserver from Xserve G4 to intel, Best practice?, Recommendations?

    Hi!
    I will receive a new Xserve intel soon and possibly want to move mail services from the currently used Xserve G4 (which is working fine) to the new Xserve intel.
    The Xserve G4 is running a heavily modified mail setup thanks to pterobyte's excellent tutorials on fixing, updating, extending, dare I say "pimping" Mac OS X Server's mailserver setup.
    What I want to achieve in the long run:
    Have mail services run on the Xserve Intel and have the Xserve G4 work as a mail backup. (They will be connected via permanent VPN, but be on different LANs with different ISPs.) They shall be serving email for at least three distinct domains then. (All low volume; currently the G4 is serving a single domain using WGM aliases.) I want (and need) to switch to Postfix aliases.
    What I need to consider:
    My client desperately wants/needs to update to Leopard server once it becomes available. Both Xserve definitely will be upgraded to Leopard Server then.
    Time is not an issue at the moment as the G4 is working very well. I want to keep the work at a minimum in regard to the Leopard switch. I am fine with an interim solution, even if it is somewhat inelegant, as long as it runs fine. The additional domains are not urgent at the moment. It will be fine when they transfer to the intel Xserve once we run Leopard.
    Questions:
    Does it pay to do all the work of moving from the G4 to the Intel (I'd need to compile and configure SpamAssassin, ClamAV, Amavisd-New, etc. again...) and move all the mailboxes, users, IMAP and SMTP, given that there will be a clean install once Leopard comes out? (I am definitely no fan of updating a Mac OS X server. Experience has proven to me that this does not work reliably.)
    Are there any recommendations or best practice hints from your experience when moving a server from PPC to intel?
    Thanks in advance
    MacLemon

    By all means do a clean install. If time is not an issue, make sure Leopard has been on the market 2-3 months before you do so.
    Here is what I would do:
    1. Clean install of Intel Server
    2. Update all components
    3. Copy all needed configuration files from PPC to Intel Server
    4. Backup PPC mail server with mailbfr
    5. Restore mail backup with mailbfr to Intel Server
    This is all that needs to be done.
    If you want to keep the G4 as a backup server, just configure it as a secondary MX in case your primary is down. Trying to keep mailboxes redundant is only possible in a cluster and a massive pain to configure (Leopard should change that though).
    HTH,
    Alex

  • Move WIKI data from one Mountain Lion Server to another

    Hi.
    I followed the instruction here:
    http://support.apple.com/kb/HT5585
    Under Copying all wikis from one OS X server to another OS X server, I am not even able to execute:
    sudo pg_dump --format=c --compress=9 --blobs --username=collab --file=/tmp/collab.pgdump collab
    It gives this error:
    pg_dump: [archiver (db)] connection to database "collab" failed: FATAL:  role "collab" does not exist
    Any idea?
    I just tested it on the production server as well as a brand-new install. Same outcome.
    I then moved /Library/Server/Wiki/FileData over, but even after stop/start and restart, the wiki server is running but not able to load content; it's like it's been wiped clean.
    matthew

    Hello Matthew
    You're looking in the wrong spot.
    First things first: make yourself default sudo with sudo -s, then you can forget prefixing it all the time.
    If you just use pg_dump, it'll take the command from the /var directory; that's the wrong version.
    You have to specify the path to the socket where the PostgreSQL database for the wiki is really located by using the -h option; it's not the default.
    That's why you get the error that role collab does not exist: you're connecting to a database in a place where the role collab truly isn't part of it.
    So, if you'd like to export the wiki DB, use the following and adapt the filename to whatever you like it to be.
    bash-3.2# /Applications/Server.app/Contents/ServerRoot/usr/bin/pg_dump -h "/Library/Server/PostgreSQL For Server Services/Socket/" -p 5432 -f /Volumes/USBSTICK/wikidatabase.pgdump -U collab collab
    The first block specifies the non-default pg_dump you'd like to use.
    The second block (-h "/Library/.....) tells pg_dump where to find the DB socket.
    The third block (-p 5432) tells pg_dump to use port 5432.
    The fourth block (-f /Volumes/......) tells pg_dump to place its output in this file.
    The fifth block (-U collab) tells pg_dump to do this as role collab.
    The sixth block tells pg_dump which DB to dump from.
    In your case, extend the command above with your options (--format=c --compress=9 --blobs) like this:
    bash-3.2# /Applications/Server.app/Contents/ServerRoot/usr/bin/pg_dump -h "/Library/Server/PostgreSQL For Server Services/Socket/" -p 5432 -F c --compress=9 -b -f /Volumes/USBSTICK/wikidatabase.pgdump -U collab collab
    BTW- you can connect to the database, of course:
    bash-3.2# psql -h "/Library/Server/PostgreSQL For Server Services/Socket/" -p 5432 collab collab
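    On the destination server, the dump can be loaded back with the matching pg_restore from the same Server.app path. This is only a sketch under the same assumptions as above (socket path, port 5432, role collab, and the USB-stick filename); it prints the command first so you can eyeball it before running it for real:

    ```shell
    # Dry run: assemble the pg_restore invocation for the destination box
    # and print it for inspection before actually executing it.
    PGBIN="/Applications/Server.app/Contents/ServerRoot/usr/bin"
    SOCKET="/Library/Server/PostgreSQL For Server Services/Socket/"
    DUMP="/Volumes/USBSTICK/wikidatabase.pgdump"

    CMD="\"$PGBIN/pg_restore\" -h \"$SOCKET\" -p 5432 -U collab -d collab --clean \"$DUMP\""
    echo "$CMD"
    # Once the paths check out, run it for real:
    # eval "$CMD"
    ```

    The --clean flag drops the existing wiki objects before restoring, which is usually what you want when replacing a freshly installed (empty-looking) collab DB.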
    try it and RATE correct answers
    Here is my thread https://discussions.apple.com/thread/5751873

  • Migration scenario from 3rd party mail server to GW12?

    Hi
    We need to migrate from existing e-mail server (Axigen ) to GW12.
    The main concern is how to do that as quickly as possible, or step by step, without disrupting mail traffic and users.
    Here is the situation:
    On the local network there is the aforementioned Axigen mail server serving approx. 70 users at a single location.
    Our official domain is hosted externally by a hosting provider, whose mail server forwards all mail to our DDNS address, from where it is routed to the local mail server. Forwarding is performed per user: each [email protected] is forwarded to [email protected], where the local server takes over.
    This setup is harder to maintain because users and aliases on the outside server must be maintained manually. However, it has some advantages. First of all, forwarding happens immediately, and if for some reason the local server is unreachable, the outside server will keep retrying for two more days.
    The second advantage is that we can use anti-spam and virus blocking on the provider's side.
    Neither would be achievable with a simple global forward of all mail to the local server.
    (This comment is a digression from the main topic.)
    Outgoing mail is relayed by the local server through our Internet provider's SMTP host.
    Local mail is delivered locally.
    We maintain a local DNS server too.
    Most users use POP, while only a few use IMAP. All users use Thunderbird so far.
    In the future, all users should use the full GW capabilities with the regular GW client, mobile clients, and so on.
    The GW12 server is already installed and up. It is tested locally and tested for remote sending; only remote receiving is untested, so the correct domain is not set yet either.
    At present, user authentication is performed by the Axigen server via an LDAP connection to eDir. STARTTLS is used between clients and server.
    Our plan is simply to change the local DNS record to point to the new GW server, change the NAT/routing to point to the new server, and set the correct domain on the GW server.
    We expect this to be transparent for users while they continue to use Thunderbird (a certificate mismatch will appear on first connection and first mail send, but we plan to warn users in advance with instructions on what to do; my phone will ring many times that morning, I am sure). The few IMAP accounts we can prepare in advance by moving mail manually to local folders and back to the GW server later.
    The last step should be migrating users from Thunderbird to the GW client. This needs to be performed manually at each user's seat: each Thunderbird client needs to be switched to IMAP, and then the mail needs to be moved to the GW server.
    Here the concern is the volume of old mail. Many users have years of old mail archived in local folders; some have quite a huge amount of archived mail with heavy attachments. However, old mail can be moved gradually later.
    So, this is our plan. Does anybody have any suggestions, comments, or wish to point out a problem we're not aware of?
    Thank you.
    Drazen
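    As a side note, the DNS/NAT flip in the plan above is easy to sanity-check just before and after the cutover. A minimal sketch (the domain and host names are placeholders, not from the original post; by default it only prints the checks, pass --run to execute them):

    ```shell
    # Pre-cutover checklist as a script. DOMAIN and GWHOST are placeholders.
    DOMAIN="ourcompany.com"      # replace with the real mail domain
    GWHOST="gw12.local.lan"      # replace with the new GW12 server

    # Compare the local DNS view with an outside resolver, and probe SMTP.
    CHECKS="dig +short MX $DOMAIN @127.0.0.1
    dig +short MX $DOMAIN @8.8.8.8
    nc -z -w 5 $GWHOST 25"

    if [ "${1:-}" = "--run" ]; then
        # Execute each check, echoing it first.
        echo "$CHECKS" | while read -r c; do echo "+ $c"; $c; done
    else
        # Dry run: just list what would be checked.
        echo "$CHECKS"
    fi
    ```

    Running the checks from both inside and outside the LAN catches the classic cutover surprise where the internal record is updated but the NAT rule (or the provider-side forward) still points at the old Axigen box.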

    On 9/24/2012 3:56 AM, dzuvela wrote:
    > [full original post quoted above - snipped]
    >
    Most of this sounds good. With such heavy IMAP/POP usage, you MIGHT
    want to consider installing a temporary dedicated GWIA just for that,
    separate from the SMTP-handling GWIA. But that might not be necessary at
    all. 70 people isn't THAT much.

  • Moving DNS to Mac OS X Server

    I've been using QuickDNS on Mac OS 9 for years but am now moving to OS X Server. The administration is very different, and I find the documentation from Apple not very clear. (Guess they don't know how to do a screen shot to show examples.) Can someone give me a quick, clear rundown on how I can get my existing domains set up in OS X Server?
    1.6GHz PM G5 - 250GB, 1GB, ATI9600.   Mac OS X (10.4.6)  

    The easiest way to move your zone files to the Mac is to set up the Mac as a secondary DNS server and have it pull the zone files from the master QuickDNS server.
    I'm guessing from your post that you want to use the Mac as the main server moving forward. In that case, if you're used to QuickDNS, I would recommend upgrading to the latest version and using that to manage your domains.
    Apple's GUI for managing DNS pales in comparison to QuickDNS.
    If you can do without the GUI and configure DNS manually you should be OK, but if you're looking for a GUI option you'll be disappointed with Server Admin.
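    For the secondary-server route in particular, the manual configuration is just one slave zone per domain in BIND's named.conf. A rough sketch (the zone name and master IP are placeholders):

    ```
    zone "example.com" {
        type slave;
        file "bak.example.com.zone";            // local copy of the transferred zone
        masters { 192.168.1.10; };              // address of the existing QuickDNS master
    };
    ```

    Once the transfer has happened for every domain, you can promote the Mac by changing each zone to type master and pointing it at the pulled zone files.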
