Scheduled Data Refresh fails with 400 error

I've set up a PowerPivot model, connected 1.5 million rows (57 MB file), rendered the data in a PowerPivot table, and uploaded it to a Power BI-enabled BI site in O365.
Report renders online.
Q&A works too.
Configured the Data Management Gateway.
Connected the DMG to Power BI.
Configured the data source with the same connection string.
Set credentials for the data source with status OK (oddly, this step can only be done from a domain-joined computer).
Enabled Data Refresh on the report with status OK (i.e. it found the corresponding data connection on the gateway).
Tried to manually refresh the data... no luck. It keeps failing with the following error message:
Failure Correlation ID: c1dcf840-e0c8-45d6-9720-b4d2c9695b5a
Errors in the high-level relational engine. The following exception occurred while the managed IDataReader interface was being used: The operation was canceled.;transfer service job status is invalid Response status code does not indicate success:
400 (Bad Request).. The current operation was cancelled because another operation in the transaction failed.
Everything seems properly configured; I get green ticks at every step of the setup. But the manual refresh keeps failing.

I've reduced the query down to 100,000 records and now I'm getting
Sorry, something went wrong. Please try again. Correlation ID: 7e498494-95d0-4dff-84ed-ead169a5617e
The DMG is not even registering the attempt in the event logs.
Before, at least, I managed to get this:
Microsoft.DataTransfer.Common.Shared.HybridDeliveryException: The Data Transfer Service has encountered a fatal error when performing the data upload. ---> Microsoft.WindowsAzure.Storage.StorageException: The client could not finish the operation within
specified timeout. ---> System.TimeoutException: The client could not finish the operation within specified timeout.
   --- End of inner exception stack trace ---
   at Microsoft.WindowsAzure.Storage.Core.Util.StorageAsyncResult`1.End()
   at Microsoft.DataTransfer.ClientLibrary.BlobUploadTask.PutBlockCallback(IAsyncResult asyncResult)
   --- End of inner exception stack trace ---
   at Microsoft.DataTransfer.ClientLibrary.BlobBinarySink.Write(IEnumerable`1 streams)
   at Microsoft.DataTransfer.ClientLibrary.BinaryTransfer.Run()
   at Microsoft.DataTransfer.TransferTask.TransferRuntimeTask.Execute()
   at Microsoft.DataTransfer.TaskHosting.ThreadTaskWorker.RunTask()
Job ID: edd0fec1-789a-4487-a371-6aafbd8aaffa
Task ID: b71f8b48-b4ff-43f5-8dc5-6aefd390c0fe
Queue ID: a41a0d92-8327-458a-80e0-2cf3065dd705
Log ID: TaskExecutionFailed
So far my analysis is:
1.5M rows: won't even start pushing data; it times out very early in the process.
300,000 rows: gets past the initial timeout (probably the query identifying how much data needs to move), only to crash on a second timeout (actually pushing the data to the cloud).
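If the gateway really is logging nothing, it can help to query the gateway host's Application event log directly instead of browsing Event Viewer. A minimal sketch (the provider name 'Data Management Gateway' is an assumption; check the exact event source on your gateway host first):
wevtutil qe Application /q:"*[System[Provider[@Name='Data Management Gateway']]]" /f:text /c:20 /rd:true
The /rd:true flag returns the newest events first, so a failed refresh attempt (or its absence) shows up immediately after you trigger the refresh.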

Similar Messages

  • Cube refresh fails with an error below

    Hi,
    We are experiencing the problem below during a Planning application database refresh. We have been refreshing the database every day, but all of a sudden the error below started appearing in the log:
    Cube refresh failed with error: java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
    java.io.EOFException
    When the database refresh is run manually from Workspace, it completes successfully, but when triggered from the Unix script it throws the above error.
    Is it related to a provisioning issue, e.g. the user having been removed from MSAD? Please help me out on this.
    Thanks,
    mani
    Edited by: sdid on Jul 29, 2012 11:16 PM

    I work with 'sdid' and here is a better explanation of what exactly is going on.
    As part of our nightly schedule we have a Unix shell script that refreshes Essbase cubes from Planning via the 'CubeRefresh.sh' shell script.
    Here is what our call looks like:
    /opt/hyperion/Planning/bin/CubeRefresh.sh /A:<cube name> /U:<user id> /P:<password> /R /D /FS
    Here is what 'CubeRefresh.sh' looks like -
    PLN_JAR_PATH=/opt/hyperion/Planning/bin
    export PLN_JAR_PATH
    . "${PLN_JAR_PATH}/setHPenv.sh"
    "${HS_JAVA_HOME}/bin/java" -classpath ${CLASSPATH} com.hyperion.planning.HspCubeRefreshCmd $1 $2 $3 $4 $5 $6 $7
    And here is what 'setHPenv.sh' looks like -
    HS_JAVA_HOME=/opt/hyperion/common/JRE/Sun/1.5.0
    export HS_JAVA_HOME
    HYPERION_HOME=/opt/hyperion
    export HYPERION_HOME
    PLN_JAR_PATH=/opt/hyperion/Planning/lib
    export PLN_JAR_PATH
    PLN_PROPERTIES_PATH=/opt/hyperion/deployments/Tomcat5/HyperionPlanning/webapps/HyperionPlanning/WEB-INF/classes
    export PLN_PROPERTIES_PATH
    CLASSPATH=${PLN_JAR_PATH}/HspJS.jar:${PLN_PROPERTIES_PATH}:${PLN_JAR_PATH}/hbrhppluginjar:${PLN_JAR_PATH}/jakarta-regexp-1.4.jar:${PLN_JAR_PATH}/hyjdbc.jar:${PLN_JAR_PATH}/iText.jar:${PLN_JAR_PATH}/iTextAsian.jar:${PLN_JAR_PATH}/mail.jar:${PLN_JAR_PATH}/jdom.jar:${PLN_JAR_PATH}/dom.jar:${PLN_JAR_PATH}/sax.jar:${PLN_JAR_PATH}/xercesImpl.jar:${PLN_JAR_PATH}/jaxp-api.jar:${PLN_JAR_PATH}/classes12.zip:${PLN_JAR_PATH}/db2java.zip:${PLN_JAR_PATH}/db2jcc.jar:${HYPERION_HOME}/common/CSS/9.3.1/lib/css-9_3_1.jar:${HYPERION_HOME}/common/CSS/9.3.1/lib/ldapbp.jar:${PLN_JAR_PATH}/log4j.jar:${PLN_JAR_PATH}/log4j-1.2.8.jar:${PLN_JAR_PATH}/hbrhppluginjar.jar:${PLN_JAR_PATH}/ess_japi.jar:${PLN_JAR_PATH}/ess_es_server.jar:${PLN_JAR_PATH}/commons-httpclient-3.0.jar:${PLN_JAR_PATH}/commons-codec-1.3.jar:${PLN_JAR_PATH}/jakarta-slide-webdavlib.jar:${PLN_JAR_PATH}/ognl-2.6.7.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/cls-9_3_1.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/EccpressoAll.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/flexlm.jar:${HYPERION_HOME}/common/CLS/9.3.1/lib/flexlmutil.jar:${HYPERION_HOME}/AdminServices/server/lib/easserverplugin.jar:${PLN_JAR_PATH}/interop-sdk.jar:${PLN_JAR_PATH}/HspCopyApp.jar:${PLN_JAR_PATH}/commons-logging.jar:${CLASSPATH}
    export CLASSPATH
    case $OS in
    HP-UX)
    SHLIB_PATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${SHLIB_PATH:-}
    export SHLIB_PATH
    ;;
    SunOS)
    LD_LIBRARY_PATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${LD_LIBRARY_PATH:-}
    export LD_LIBRARY_PATH
    ;;
    AIX)
    LIBPATH=${HYPERION_HOME}/common/EssbaseRTC/9.3.1/bin:${HYPERION_HOME}/Planning/lib:${LIBPATH:-}
    export LIBPATH
    ;;
    *)
    echo "$OS is not supported"
    ;;
    esac
    We have not made any changes to the shell script, 'CubeRefresh.sh', or 'setHPenv.sh'.
    For the past couple of days the shell script that executes 'CubeRefresh.sh' has been failing with the error message below.
    Cube refresh failed with error: java.rmi.UnmarshalException: error unmarshalling return; nested exception is:
    java.io.EOFException
    This error is causing our Essbase cubes to not get refreshed from Planning through these batch jobs.
    On the other hand, the manual refresh from within Planning works.
    We are on Hyperion® Planning – System 9, Version 9.3.1.1.10.
    Any help on this would be greatly appreciated.
    Thanks
    Andy
    Edited by: Andy_D on Jul 30, 2012 9:04 AM
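    Since the manual refresh from Workspace works and only the scripted run fails, one low-risk next step is to make the nightly job capture the full output and exit status of CubeRefresh.sh. A minimal sketch of a logging wrapper, reusing the placeholders from the post above:
    #!/bin/sh
    # Log everything CubeRefresh.sh prints, plus its exit code, for the nightly run
    LOG=/tmp/cuberefresh_$(date +%Y%m%d_%H%M%S).log
    /opt/hyperion/Planning/bin/CubeRefresh.sh /A:<cube name> /U:<user id> /P:<password> /R /D /FS > "$LOG" 2>&1
    RC=$?
    echo "CubeRefresh.sh exited with code $RC" >> "$LOG"
    exit $RC
    Comparing the logged output of a failing batch run with a working manual refresh should show whether the RMI call fails immediately (e.g. environment or provisioning) or mid-refresh.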

  • Error in scheduled data refresh in Power BI

    Hi everybody,
    I have errors in scheduled data refresh in Power BI.
    In fact, whenever I run a refresh, it reports an error on the site (on-premise): "Sorry, the data source of this data connection is not registered with Power BI. Ask your Power BI administrator to add the data source in the Power BI Admin Center. An error has occurred while processing the 'X' table. The current operation was canceled because another operation in the transaction failed."
    I don't know what to do, please help :)

    Have you configured the gateway and the data source, and do they pass the test connection on the Power BI admin side?
    If you did, it will help if you share a correlation ID and the timeframe when this happened. If not, check out
    this documentation
    GALROY

  • Planning Data Pull process failed with timeout error

    Hi Experts,
    Version: Oracle apps 11.5.10.2
    Issue: Planning Data Pull process failed with timeout error
    The message in the log file is as follows:
    The Request id : 90018907 has Phase : COMPLETE and Status: ERROR
    Concurrent Message : Timeout error.
    There is an Unknown error in the Worker.
    Planning Data Pull process failed.
    +-------------------------------------
    Please advise what the problem could be. I submitted the standard data collection programs with 900 mins and 8 workers.
    Refresh Collection Snapshot completed without any issue

    Please see these docs.
    Data Collection Fails Because Of Time Out Timeout error [ID 339968.1]
    OPM-ASCP: Data Collection Timeout Error [ID 601539.1]
    STD COLLECTION FAILING AT PLANNING DATA PULL WITH TIMEOUT ERROR [ID 978472.1]
    Data Collections is Failing - All Errors - First Diagnostic Steps [ID 207644.1]
    Troubleshooting Errors with ATP/Planning Data Collections [ID 1227255.1]
    MSCPDC PLANNING ODS LOAD ERRORS WITH TIMEOUT ERROR - POOR PERFORMANCE [ID 417633.1]
    Thanks,
    Hussein

  • 703: Subdaemon connect to data store failed with error TT9999

    All,
    I'm getting the following error whilst trying to connect to a TimesTen DB:
    connect "DSN=my_cachedb";
    703: Subdaemon connect to data store failed with error TT9999
    In the tterrors.log:
    16:39:24.71 Warn: : 2568: 3596 ------------------: subdaemon process exited
    16:39:24.71 Warn: : 2568: 3596 exited while connected to data store '/u01/ttdata/datastores/my_cachedb' shm 33554529 count=1
    16:39:24.71 Warn: : 2568: daRecovery: subdaemon 3596, managing data store, failed: invalidate (failcode=202)
    16:39:24.71 Warn: : 2568: Invalidating the data store (failcode 202, recovery for 3596)
    16:39:24.72 Err : : 2568: TT14000: TimesTen daemon internal error: Could not send 'manage' request to subdaemon rc -2 err1 703 err2 9999
    16:39:24.72 Warn: : 2568: 3619 Subdaemon reports creation failure
    16:39:24.72 Err : : 2568: TT14000: TimesTen daemon internal error: Deleting 3619/0x1558650/'/u01/ttdata/datastores/my_cachedb' - from association table - not found
    16:39:24.72 Err : : 2568: TT14004: TimesTen daemon creation failed: Could not del from dbByPid internal table
    16:39:24.81 Warn: : 2568: child process 3596 terminated with signal 11
    16:39:25.09 Err : : 2568: TT14000: TimesTen daemon internal error: daRecovery for 3619: No such data store '/u01/ttdata/datastores/my_cachedb'
    I've checked and the datastore does exist and is owned by the timesten UNIX user.
    ttversion:
    TimesTen Release 11.2.2.2.0 (64 bit Linux/x86_64) (tt1122:53396) 2011-12-23T09:26:28Z
    Instance admin: timesten
    Instance home directory: /home/timesten/TimesTen/tt1122
    Group owner: timesten
    Daemon home directory: /home/timesten/TimesTen/tt1122/info
    PL/SQL enabled.
    Datastore definition from sys.odbc.ini:
    [my_cachedb]
    Driver=/home/timesten/TimesTen/tt1122/lib/libtten.so
    DataStore=/u01/ttdata/datastores/my_cachedb
    LogDir=/u01/ttdata/logs
    PermSize=40
    TempSize=32
    DatabaseCharacterSet=AL32UTF8
    OracleNetServiceName=testdb
    Kernel parameters from sysctl -a:
    kernel.shmmax = 68719476736
    kernel.shmall = 4294967296
    Memory / SWAP:
    MemTotal: 2050784 kB
    SWAP: /dev/mapper/VolGroup00-LogVol01 partition 4095992
    I'm new to TimesTen and I'm planning on evaluating it to see if it could solve an issue we're having. Any suggestions would be much appreciated.
    Thanks,
    Ian.

    Hi Ian,
    Can you please answer the following / provide the following information:
    1. What are your kernel parameters relating to semaphores set to? Is anything else on the machine using significant numbers of semaphores?
    2. Please provide the output of the following shell commands:
    ls -ld /u01
    ls -ld /u01/ttdata
    ls -ld /u01/ttdata/datastores
    ls -ld /u01/ttdata/logs
    3. Please provide an excerpt of the detailed message log (ttmesg.log) between around 16:38 and 16:40 (i.e. from a little while before the problem until after the problem).
    Thanks,
    Chris
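    For item 1 in Chris's list, the semaphore limits and current usage on Linux can be gathered like this:
    # Semaphore limits, in order: SEMMSL, SEMMNS, SEMOPM, SEMMNI
    sysctl kernel.sem
    # Semaphore arrays currently allocated (look for heavy use by other software)
    ipcs -s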

  • Project Server 2010 - Update Scheduled Queue job failed with error message "Error no 131 - AssignmentAlreadyExists"

    Hello,
    When I tried to update the schedule of the project, the job failed with the error message "AssignmentAlreadyExists". I looked further into the ULS logs but they weren't much help.
    This is the message I found in ULS logs "type = UpdateScheduledProject failed at Message 5 and is blocking the correlation Errors: AssignmentAlreadyExists,Leaving Monitored Scope (FillTypedDataSet -- MSP_WEB_SP_QRY_ReadTimeSheetAssignmentsAndCustomFieldData).
    Execution Time=398.730006187937"
    Please help me understand the root cause of this problem.
    I have verified all the assignments in the plan and I did not find any duplicates.
    Thanks in advance.
    Sesha

    This issue might need detailed troubleshooting; I would suggest you raise a support case with Microsoft.
    Cheers! Happy troubleshooting!!! Dinesh S. Rai - MSFT Enterprise Project Management. Please click Mark As Answer if a post solves your problem, or Vote As Helpful if a post has been useful to you. This can be beneficial to other community members reading the thread.

  • External Data Refresh Failed. We cannot locate a server to load the workbook Data Model. ThisWorkBookDataModel

    Hi All,
    I have been trying to fix this for days now. I have tried the solutions in many articles but to no avail. So while the error message is something you may have seen many times, I just can't find a solution in my case.
    This is the error (the original post included a screenshot; the text follows in case the image isn't viewable):
    "External Data Refresh Failed. We cannot locate a server to load the workbook Data Model. We were unable to refresh one or more data connections in this workbook. The following connections failed to refresh: ThisWorkBookDataModel."
    What is worse is I have checked the ULS (SharePoint Trace Logs), the Event Viewer Logs and the OWA Logs and I cannot find a specific error that would help pin point the problem.
    Excel Workbook
    So what am I doing? I have an Excel 2013 workbook and I create a "SQL Server" connection to the AdventureWorksDW database, add a pivot table and a pivot chart, test it in Excel, and all works fine.
    I save the Excel workbook to SharePoint 2013 and then select "Data" then "Refresh All Connections" and then I get the error in the picture above.
    Even more puzzling is I have another Excel workbook that also has pivot tables and pivot charts in the AdventureWorksDW2012Multidimensional cube database in "SQL Analysis Services" and this works fine. Hmmm.
    My Environment
    My environment is Windows 2008 R2 Server, SharePoint 2013 with the April Service Pack1 and a separate server with OWA2013 SP1. It has an SQL Server 2008 R2 database which has been upgraded to SQL Server 2012.
    Data Model Settings
    In Excel Services this is set to my server name, which is "server-name". As I do not have named instances, all I can enter is the server name. As this works everywhere else, including the workbook outside of SharePoint, I do not think this is the problem.
    But I could be wrong.
    Unattended Account
    I have set this up for the PowerPivot Services App and Excel Services App.
    ODC Connections in Excel
    I have tried all 3 authentication modes, Windows, Secure Store ID and "None" which is the unattended account. I have not tried the other connection types, should I?
    Not in WOPI
    I am not in WOPI mode.
    AD Accounts
    I have added permissions in the SharePoint Services and SQL Server, and as they work in Excel outside of SharePoint, I do not think it is a permissions issue. I could be wrong of course, but the problem is in one of SharePoint, OWA, AD,
    SQL Server, Excel, and Windows Server.
    Isolate the Error
    Below is a list of errors I think are relevant but they do not tell me much. The SharePoint logs are not really giving me an error that tells me what to do and where to do it, or even why it cannot refresh, (perhaps not noticeable by the untrained eye).
    Problem with SQL Server Not Analysis Services
    So my cube database in analysis services works fine in SharePoint/OWA but not the databases in sql server. This is my best clue but I have no idea what it means. Why would it work with an Analysis Services connection but not an "SQL Server" connection?
    It Works Outside of SharePoint
    If I run the excel worksheet outside of SharePoint all works fine. When inside OWA this is where the refresh error occurs.
    Errors from Event Viewer on SharePoint Server using ULS Viewer
    "Failed to create an external connection or execute a query. Provider message: There are no servers available or actively being initialized., ConnectionName: , Workbook:"
    "Refresh failed for 'ThisWorkbookDataModel' in the workbook 'http://server...'. [Session: 1.V22.26itT0lx8piNFeqtuGVhN214.5.en-US5.en-US36.98c0e158-9113-46e9-850e-edda81d9ed1c1.A1.N User: 0#.w|ad\testuser1]"
    And an error in the ULS under the "Data Model" category:
    "--> Check Deployment Mode (server-name): Fail (Expected: SharePoint, Actual: Multidimensional)."
    This last error, as it turned out, defined the problem concisely, although I was yet to work out what it meant in some detail.

    I finally solved this myself (or should I say with the help of several key articles).
    The refresh did not work because the database was not in "SharePoint Mode". Yes, SQL Server has modes, 3 of them in fact.
    If you installed SharePoint to the default SQL instance which would be called <servername> then you cannot use this default instance for Excel 2013 workbooks in OWA 2013 because the refresh only works if the database is in SharePoint mode.
    So what are these 3 modes? The Deployment Mode property in the msmdsrv.ini file has them as:
    0 = Multidimensional mode (the default whenever you install SQL Server normally)
    1 = PowerPivot for SharePoint mode
    2 = Tabular mode
    How do you know what mode you are in? That's easy: open SQL Server Management Studio and connect to all your SQL database engine instances (ignore Analysis Services or SSRS as they are not database engines). If you only have the default instance then that is almost certainly in Multidimensional mode, which is the default and what SharePoint installs its databases to.
    You must have an instance called <servername>\POWERPIVOT. This instance is the "SharePoint mode" needed, and it is the default instance name when you install an SQL instance in this mode.
    If you don't see <servername>\POWERPIVOT in SQL Server then you do not have an instance that is in SharePoint mode. This is because you cannot simply switch modes on an SQL Server.
    You have to install a new instance in the required mode; that's the only way.
    That's easy enough. Load up the SQL Server setup CD and run setup. Install a brand new instance and select "SQL Server PowerPivot for SharePoint" when you get there in the wizard.
    Now you will have the default instance that stores all the SharePoint databases and that is in mode 0, and a new instance called <servername>\POWERPIVOT that is in mode 1. The "<servername>\POWERPIVOT" instance connection is what you
    will use for Excel 2013 when rendering in OWA 2013.
    You also need to ensure OWA 2013 is not in WOPI mode for Excel worksheets. See the last link below for more information about WOPI.
    Next you should go to the Excel Service App in CA and click Data Model Settings and add the <servername>\POWERPIVOT instance.
    Then you have to either turn off the firewall on the SQL server machine, or create an inbound rule on the Windows firewall to open the TCP port for the <servername>\POWERPIVOT instance:
    1. Start Task Manager and then click Services to get the PID of the MSOLAP$InstanceName.
    2. Run netstat –ao –p TCP from the command line to view the TCP port information for that PID.
    Finally, you can now create Excel 2013 workbooks that run in OWA without refresh errors, as long as you are connecting to the <servername>\POWERPIVOT instance. Hooray.
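    A command-line sketch of steps 1 and 2 above, plus the firewall rule (the service name pattern MSOLAP and the example PID/port are assumptions to verify on your own server):
    :: Find the PID of the POWERPIVOT Analysis Services service
    tasklist /svc | findstr /i MSOLAP
    :: List the TCP ports that PID is listening on (replace 1234 with the PID found above)
    netstat -ao -p TCP | findstr 1234
    :: Allow that port inbound (replace 54321 with the port found above)
    netsh advfirewall firewall add rule name="SSAS POWERPIVOT inbound" dir=in action=allow protocol=TCP localport=54321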
    REFERENCES
    Look for the string "There are no servers available or actively being initialized" in this article:
    http://blogs.msdn.com/b/analysisservices/archive/2012/08/02/verifying-the-excel-services-configuration-for-powerpivot-in-sharepoint-2013.aspx
    Determine the server mode:
    http://msdn.microsoft.com/en-au/library/gg471594(v=sql.110).aspx
    Install the SharePoint PowerPivot instance (aka SharePoint mode)
    http://msdn.microsoft.com/en-au/library/eec38696-5e26-46fa-bc83-aa776f470ce8(v=sql.110)
    Open the port for the new SQL instance:
    http://msdn.microsoft.com/en-us/library/ms174937(v=sql.110).aspx
    Turn Off WOPI for Excel OWA
    http://blogs.technet.com/b/excel_services__powerpivot_for_sharepoint_support_blog/archive/2013/01/31/powerpivot-for-sharepoint-browser-refresh-fails-data-refresh-not-supported-in-office-web-apps.aspx

  • Project Server 2013 - External Data Refresh Failed

    Hi, 
    I have built a new Project Server 2013 farm (3-tier, with OWA) and configured the PWA site. Everything is working fine except Reports in the BI Center, where I am getting an "External Data Refresh Failed" error:
    http://Sp2013/PWA/ProjectBICenter/Sample%20Reports/Forms/AllItems.aspx
    I have also stopped using Excel Web App in favour of Excel Services, and get the error below:
    External Data Refresh Failed
    An error occurred while accessing application id ProjectServerApplication from Secure Store Service. The following connections failed to refresh:
    Project Server - Issue Data
    Project Server - Risk Data
    Learn more about data refresh
    Steps tried:
    1. Deleted Excel Services - recreated.
    2. Deleted Secure Store - recreated.
    Built a test box without OWA; still have the same issue.
    Any suggestions/solutions greatly appreciated.
    Thanks 
    Aleem
    Alim

    Hi,
    If you are still facing the same issue, follow the steps below:
    1) Access PWA from your server.
    2) Download any built-in report from the Reports section (in your case Project Server - Issue Data, Project Server - Risk Data) to your server.
    3) Open the report in Excel, go to Data -> Connections; in the Workbook Connections dialog select "Project Server - Issue Data" and go to Properties.
    4) On the 'Definition' tab, under connection file, remove the tick for 'Always use connection file'.
    5) Tweak your connection string & tick the save password option.
    6) Excel Services Authentication Settings (suit yourself).
    Now save & publish on PWA; hope it works :)

  • "External Data Refresh Failed" in Excel Workbook in SharePoint 2013

      Hi Everyone,
    I hope someone can help me sort this out. I wasn't sure if this should be submitted to SQL Server 2012 or SharePoint 2013 but I think I have ruled out an actual SQL issue here.
    I have a test SharePoint 2013 Farm installed on one server.  On that server we have SQL Server 2012 Standard (Database Engine Only) installed as one instance for SharePoint 2013.  SharePoint 2013 is installed as Enterprise. 
    On another server I have SQL Server 2012 Enterprise installed I have both the Database Engine and Analysis Services installed on that server in one Instance.
    I have created a new Analysis Services cube and stored it on that server using VS 2012 Data Tools. I then use Office 2013 Excel, create a new pivot table, and bring that data into the workbook. I save that workbook to my desktop and can click on the DATA tab and do a "Refresh All", and the data is refreshed with no problem.
    I then Save that Workbook to SharePoint 2013 in my Business Center documents library. I then try to do a Data Refresh and get the following error: External Data Refresh Failed. We were unable to refresh one or more data connections in the workbook. The following
    connections failed to refresh: servername ConnectionName.
    Let me tell you a little bit about how I have my SharePoint 2013 setup.
    In Application Management ==> Secure Store Service - I have a Target Application Id setup with an AD account name and password.
    In Application Management ==> Excel Services Application Settings - I an using an Unattended Service Account that is pointing to the Target Application ID that I setup in Secure Store Service.
    Now, in SQL Server 2012, I have made the AD account a SYSADMIN (because nothing seemed to work) and mapped that login to the database that I created my cube from.
    I have gone into the Properties of my Analysis Server on SQL Server 2012 and put that AD userid on the Security page as a Server Administrator.
    Now let me tell you the real kicker here. 
    I also have a Sharepoint 2010 farm that we have been testing for about a year.  This is something we wanted to go into production with but now have decided we will deploy our new production site on 2013. 
    This has the same setup only I am using SQL Server 2008 R2 as my database for SharePoint 2010, and am using SQL Server 2008 R2 on another server and instance for Database Engines and Analysis Server. 
    I have already created a cube on the SQL 2008 r2 Analysis Server using BIDS.
    I created an Excel pivot table based on that cube and saved it to the SharePoint 2013 site.
    I then put the SSS AD account name on the SQL Server 2008 Analysis Services Security as Administrator.
    I pull up that Excel Workbook and go into Data and do a Refresh All and it works. 
    I then bring up my copy of the Excel workbook I created for my SQL Server 2012 Cube (this does not work on SP 2013) and save it to my Business Center in SP 2010.  This will do a DATA Refresh on that workbook.
    Another thing that I have done (we have an Office Web Apps Server running on the SP 2013 side) is set a WOPI suppression setting (New-SPWOPISuppressionSetting -Action view -Extension xlsx) so that when I bring up the Excel spreadsheet in 2013 it has a URL with the xlviewer instead of the WOPI URL.
    OK. I have gone through all of my setup. I am just wondering if I missed something in my install for this. HELP. There can't be that much difference in the installation between 2010 and 2013.
    Thanks in Advance everyone.
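    For reference, the WOPI suppression step the poster describes is normally applied from the SharePoint 2013 Management Shell; a minimal sketch (run on the SharePoint server):
    # Make .xlsx render via Excel Services (xlviewer) instead of Office Web Apps
    New-SPWOPISuppressionSetting -Extension "XLSX" -Action "view"
    # Confirm the suppression is registered
    Get-SPWOPISuppressionSetting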

    After many hours of research, and after totally burning down my SQL Server 2012 server because of the many changes I made to it, I finally found the issue that caused this, and specifically why SP would not work with SQL Server 2012 Analysis Services. This time I reinstalled SQL Server 2012 from scratch but only put SP1 on at this time. I can't be completely sure this fixed the issue since I have not put any of the other CUs on the box. But after the reinstall, everything seemed to work perfectly from the Excel Services side.
    The issue I then ran into was that I could not get my PerformancePoint Services to work with my cubes that were created on SQL Server 2012. I received the error message "The DataSource Provider for data sources of type 'ADOMD.NET' is not registered" in my SharePoint 2013 server's Event Viewer.
    I finally found this post on the issue: 
    http://yossidahan.wordpress.com/2012/08/14/cant-get-ssas-databases-to-appear-in-performance-point-dashboard-designer-check-you-adomd-net-version/
    It seems that SP 2013 is built to use SQL Server 2008 R2 Analysis Services, so you need to install the ADOMD.NET from SQL Server 2008 in order for it to work. But make sure you install version 10, since if you install any other it doesn't seem to work.
    I feel like I wasted a whole month tracking these issues down, and I haven't been able to test Power View, PowerPivot or SSRS yet. I hope there are not any more.

  • Windows Server 2012 Windows Backup failed with following error code '0x8078006B' (Windows Backup failed to create the shared protection point on the source volumes.).

    The Volume Shadow Copy service initially was running under the context of System, so we thought that 'System' doesn't have permissions to access network shares.
    When the Volume Shadow Copy service was running under the context of System, this was the error logged:
    "failed with following error code '0x8078014B' (There was a failure in creating a directory on the backup storage location.)."
    This is likely due to not having permissions to write to the network location.
    This is a scheduled backup trying to write to a network location, so we changed the service to run under the context of an account that does have permissions to write to the network share.
    Then the error changed to this:
    "failed with following error code '0x8078006B' (Windows Backup failed to create the shared protection point on the source volumes.)."
    HRESULT 0x8078006b
    DetailedHRESULT 0x8004230f
    ErrorMessage %%2155348075
    BackupState 12
    VolumesInfo <VolumeInfo><VolumeInfoItem Name="C:" OriginalAccessPath="C:" State="15" HResult="-2139619228" DetailedHResult="0" PreviousState="0" IsCritical="1" IsIncremental="0"
    BlockLevel="0" HasFiles="1" HasSystemState="0" IsCompacted="0" IsPruned="0" IsRecreateVhd="0" FullBackupReason="0" DataTransferred="0" NumUnreadableBytes="0" TotalSize="0"
    TotalNoOfFiles="0" Flags="1604" BackupTypeDetermined="0" SSBTotalNoOfFiles="0" SSBTotalSizeOnDisk="0" /><VolumeInfoItem Name="D:" OriginalAccessPath="D:" State="15" HResult="-2139619228"
    DetailedHResult="0" PreviousState="0" IsCritical="0" IsIncremental="0" BlockLevel="0" HasFiles="1" HasSystemState="0" IsCompacted="0" IsPruned="0" IsRecreateVhd="0"
    FullBackupReason="0" DataTransferred="0" NumUnreadableBytes="0" TotalSize="0" TotalNoOfFiles="0" Flags="68" BackupTypeDetermined="0" SSBTotalNoOfFiles="0" SSBTotalSizeOnDisk="0"
    /></VolumeInfo>
    We aren’t really seeing anything that gives any hint on what the issue is. 
    Any ideas?  Thanks in advance!
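    The DetailedHRESULT 0x8004230f sits in the VSS (shadow copy) error range, so it is worth checking the VSS stack on the source volumes; a few standard diagnostics from an elevated prompt:
    :: All writers should report state Stable with no last error
    vssadmin list writers
    :: A software shadow copy provider must be registered
    vssadmin list providers
    :: Check shadow copy storage associations for the source volumes C: and D:
    vssadmin list shadowstorage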

    We are trying to back up folders/files from 2 local drives (C: & D:), both of which have only 10% space used and 100 GB free.
    We are attempting to back the files up to a Remote Shared File (and there is 100+ GB of free space out there).
    If we try another network location, we receive the exact same error.
    This is Windows Server 2012, not running Hyper-V, and it is a physical server, not a VM.
    Thank you for the link.
    Looking in: Event Viewer / Application and Service Logs / Microsoft / Windows / Backup / Operational
    But it doesn't seem to give any more details:
    Log Name: Microsoft-Windows-Backup
    Source: Microsoft-Windows-Backup
    Date: 7/8/2013 8:00:12 PM
    Event ID: 5
    Task Category: None
    Level: Error
    Keywords:
    User: SYSTEM
    Computer: servername.edu
    Description:
    The backup operation that started at '‎2013‎-‎07‎-‎09T02:00:06.273000000Z' has failed with following error code '0x8078006B' (Windows Backup failed to create the shared protection point on the source volumes.).
    Please review the event details for a solution, and then rerun the backup operation once the issue is resolved.
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
    <Provider Name="Microsoft-Windows-Backup" Guid="{1DB28F2E-8F80-4027-8C5A-A11F7F10F62D}" />
    <EventID>5</EventID>
    <Version>3</Version>
    <Level>2</Level>
    <Task>0</Task>
    <Opcode>0</Opcode>
    <Keywords>0x4000000000000000</Keywords>
    <TimeCreated SystemTime="2013-07-09T02:00:12.872602100Z" />
    <EventRecordID>30</EventRecordID>
    <Correlation />
    <Execution ProcessID="3028" ThreadID="3996" />
    <Channel>Microsoft-Windows-Backup</Channel>
    <Computer>servername.edu</Computer>
    <Security UserID="S-1-5-18" />
      </System>
      <EventData>
    <Data Name="BackupTemplateID">{A421E864-A115-4288-8D12-F4878CF8A248}</Data>
    <Data Name="HRESULT">0x8078006b</Data>
    <Data Name="DetailedHRESULT">0x8004230f</Data>
    <Data Name="ErrorMessage">%%2155348075</Data>
    <Data Name="BackupState">12</Data>
    <Data Name="BackupTime">2013-07-09T02:00:06.273000000Z</Data>
    <Data Name="BackupTarget">\\servername\BACKUP</Data>
    <Data Name="NumOfVolumes">2</Data>
    <Data Name="VolumesInfo">&lt;VolumeInfo&gt;&lt;VolumeInfoItem Name="C:" OriginalAccessPath="C:" State="15" HResult="-2139619228" DetailedHResult="0" PreviousState="0" IsCritical="1" IsIncremental="0" BlockLevel="0" HasFiles="1" HasSystemState="0"
    IsCompacted="0" IsPruned="0" IsRecreateVhd="0" FullBackupReason="0" DataTransferred="0" NumUnreadableBytes="0" TotalSize="0" TotalNoOfFiles="0" Flags="1604" BackupTypeDetermined="0" SSBTotalNoOfFiles="0" SSBTotalSizeOnDisk="0" /&gt;&lt;VolumeInfoItem
    Name="D:" OriginalAccessPath="D:" State="15" HResult="-2139619228" DetailedHResult="0" PreviousState="0" IsCritical="0" IsIncremental="0" BlockLevel="0" HasFiles="1" HasSystemState="0" IsCompacted="0" IsPruned="0" IsRecreateVhd="0" FullBackupReason="0" DataTransferred="0"
    NumUnreadableBytes="0" TotalSize="0" TotalNoOfFiles="0" Flags="68" BackupTypeDetermined="0" SSBTotalNoOfFiles="0" SSBTotalSizeOnDisk="0" /&gt;&lt;/VolumeInfo&gt;</Data>
    <Data Name="SourceSnapStartTime">2013-07-09T02:00:06.289250300Z</Data>
    <Data Name="SourceSnapEndTime">1601-01-01T00:00:00.000000000Z</Data>
    <Data Name="PrepareBackupStartTime">&lt;TimesList&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;/TimesList&gt;</Data>
    <Data Name="PrepareBackupEndTime">&lt;TimesList&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;/TimesList&gt;</Data>
    <Data Name="BackupWriteStartTime">&lt;TimesList&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;/TimesList&gt;</Data>
    <Data Name="BackupWriteEndTime">&lt;TimesList&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;/TimesList&gt;</Data>
    <Data Name="TargetSnapStartTime">1601-01-01T00:00:00.000000000Z</Data>
    <Data Name="TargetSnapEndTime">1601-01-01T00:00:00.000000000Z</Data>
    <Data Name="DVDFormatStartTime">&lt;TimesList&gt;&lt;/TimesList&gt;</Data>
    <Data Name="DVDFormatEndTime">&lt;TimesList&gt;&lt;/TimesList&gt;</Data>
    <Data Name="MediaVerifyStartTime">&lt;TimesList&gt;&lt;/TimesList&gt;</Data>
    <Data Name="MediaVerifyEndTime">&lt;TimesList&gt;&lt;/TimesList&gt;</Data>
    <Data Name="BackupPreviousState">2</Data>
    <Data Name="ComponentStatus">&lt;ComponentStatus&gt;&lt;/ComponentStatus&gt;</Data>
    <Data Name="ComponentInfo">&lt;ComponentInfo&gt;&lt;/ComponentInfo&gt;</Data>
    <Data Name="SSBEnumerateStartTime">1601-01-01T00:00:00.000000000Z</Data>
    <Data Name="SSBEnumerateEndTime">1601-01-01T00:00:00.000000000Z</Data>
    <Data Name="SSBVhdCreationStartTime">1601-01-01T00:00:00.000000000Z</Data>
    <Data Name="SSBVhdCreationEndTime">1601-01-01T00:00:00.000000000Z</Data>
    <Data Name="SSBBackupStartTime">1601-01-01T00:00:00.000000000Z</Data>
    <Data Name="SSBBackupEndTime">1601-01-01T00:00:00.000000000Z</Data>
    <Data Name="SystemStateBackup">&lt;SystemState IsPresent="0" HResult="0" DetailedHResult="0" /&gt;</Data>
    <Data Name="BMR">&lt;BMR IsPresent="0" HResult="0" DetailedHResult="0" /&gt;</Data>
    <Data Name="VssFullBackup">false</Data>
    <Data Name="UserInputBMR">false</Data>
    <Data Name="UserInputSSB">false</Data>
    <Data Name="BackupSuccessLogPath">
    </Data>
    <Data Name="BackupFailureLogPath">
    </Data>
    <Data Name="EnumerateBackupStartTime">&lt;TimesList&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;/TimesList&gt;</Data>
    <Data Name="EnumerateBackupEndTime">&lt;TimesList&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;/TimesList&gt;</Data>
    <Data Name="PruneBackupStartTime">&lt;TimesList&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;/TimesList&gt;</Data>
    <Data Name="PruneBackupEndTime">&lt;TimesList&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;Time Time="1601-01-01T00:00:00.000Z" /&gt;&lt;/TimesList&gt;</Data>
    <Data Name="BackupFlags">0x9</Data>
    <Data Name="ComponentInfoSummary">&lt;ComponentInfoSummary ComponentInfoArrayPresent="1" TotalComponents="0" SucceededComponents="0" /&gt;</Data>
      </EventData>
    </Event>

  • SAP BW Query failing with ORA ERROR

    Hi There
    We have SAP BW 3.5 on AIX 5.3 TL10. Oracle is 9i (9.2.0.7). Our users are facing problems with the cost calculation job on cubes.
    The job failed with the error below:
    Database error 1115 at FET access to table /BI0/03538
    > ORA-01115: IO error reading block from file 48 (block #
    > 1144987)#ORA-01110: data file 48:
    > '/oracle//sapdata11/.data35.dbf'#ORA-27091: skgfqio:
    > unable to queue I/O#ORA-27072: skgfdisp: I/O error#IBM AIX
    > RISC System/6000 Error: 5: I/O error
    Database error text........: "ORA-01115: IO error reading block from file 2
    (block # 214041)#ORA-01110: data file 2:
    '/oracle//sapdata4/_1/.data1'#ORA-27091: skgfqio: unable to queue
    I/O#ORA-27072: skgfdisp: I/O error#IBM AIX RISC System/6000 Error: 5: I/O
    error"
    Internal call code.........: "[RSQL/READ/RSSDBATCH ]"
    More Info:
    Checked the alert log: no errors there. Checked datafile status and everything is perfect! The errors are appearing in the SAP application logs only.
    Is it due to any patches missing on AIX/Oracle?
    Any idea, guys? It looks like some Oracle/AIX error.

    Hello Shrinath,
    please post the following output:
    shell> aioo -a
    Maybe you have reached the maximum number of asynchronous I/O requests.
    For example, in our SAP environments the following parameters are set:
    shell> aioo -a
                   minservers = 8
                   maxservers = 400
                   maxreqs = 1228
                   fsfastpath = 1
    Regards
    Stefan
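    If those limits do turn out to be exhausted, they can be adjusted with the same utility; a sketch, with the caveat that the -o option and whether changes apply online are assumptions to verify against your AIX 5.3 documentation (the values are Stefan's examples, not recommendations):
    # Show the current asynchronous I/O settings
    aioo -a
    # Example: raise the AIO server and request limits
    aioo -o maxservers=400
    aioo -o maxreqs=1228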

  • Dimension refresh fails with ORA-34034

    Hi All,
    We are currently experiencing strange behaviour when we refresh one of our dimensions.
    One of our dimensions is refreshed daily and had been working fine until yesterday, when it failed with this error:
    <ERROR>
    <![CDATA[
    XOQ-01601: error while loading data for Cube Dimension "OLAPUSER.PROD" into the analytic workspace
    ORA-34034: 742 is already a value of SALES!PROD_CLASS_SURROGATE.
    XOQ-01600: OLAP DML error while executing DML "SYS.AWXML!R11_LOAD_DIM"]]>
    </ERROR>
    The PROD dimension is a balanced hierarchy and we've checked that there aren't any duplicate keys at any level of the hierarchy.
    Any idea?
    Thanks

    I have seen this (in 11.1.0.7, I believe) when the metadata cache ("kgl") gets out of sync with the data dictionary. Specifically, there is a flag that determines whether prefixes get added to dimension members ("use surrogates" in AWM terms) that becomes false instead of true. If you look at the generated SQL in the OUTPUT column of CUBE_BUILD_LOG, you may find that sometimes a prefix is added to dimension members (e.g. "LEAF_LEVEL_" || dim_table.leaf_column) and other times it is not (e.g. just dim_table.leaf_column). If this is the case, a workaround may be to execute the following (as DBA) before building the dimension.
    alter system flush shared_pool;
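    A sketch of the CUBE_BUILD_LOG check described above (the schema name is taken from the error message; the exact column list varies by release, so verify TIME/BUILD_OBJECT in your version):
    sqlplus olapuser <<'SQL'
    -- Compare the generated SQL across recent loads of the PROD dimension
    SELECT time, build_object, status, output
    FROM cube_build_log
    WHERE build_object = 'PROD'
    ORDER BY time DESC;
    SQL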

  • External Drive Disk Erase Failed with the Error Input/Output Error

    I have 2 hard drives in an external FW800 enclosure that I am unable to format. When I go to initialize the drives in Disk Utility, I get the following error message: "Disk Erase failed with the error: Input/output error."
    The drives show up in Disk Utility, but I can't repair them (that option is grayed out). Disk Utility correctly IDs the manufacturer of the drives (Maxtor) and their size (200 GB each), so it's obviously seeing that the drives are there. But it won't let me format them.
    The drives are new, by the way; they don't have any data/files on them. I have Disk Warrior, but the drives don't show up there to be repaired -- probably because they aren't formatted yet.
    After looking at other posts, I tried switching the jumper settings around on the drives -- from Master/Slave to cable select and back again, but it didn't help. I also tried doing a zero erase (even though the drives are new), zapping the PRAM -- again, no help.
    One question I had is whether this could be a bad FW800 cable? The cable is new -- it came with the enclosure, which is an OWC Dual FW 800 enclosure. Other than that, does anyone have any other thoughts about what's causing this? Any help would be greatly appreciated.
    Matthew

    SOLUTION!!!!
    I had the exact same problem. I have the original 20 GB hard drive that came in my Powerbook G4 550MHz and a couple of years ago I traded up for a 60 GB drive and bought a FW/USB enclosure for my original drive to use it to backup my important files. I hadn't backed up in over a year (shame on me!) and I decided maybe I should erase the drive and start from scratch. It was connected via USB.
    At that point Disk Utility gave me the exact same Input/Output error. I tried partitioning the drive into 1 or more partitions but came up with the same error. I couldn't figure out what was wrong so I decided to start up in OS 9.2.2. I did that and let it start up, then plugged in the hard drive and it gave me the standard "This disk is unrecognizable, do you want to eject or erase?" so I clicked Initialize. It worked!
    Just make sure you choose the MacOS Extended option when initializing out of OS 9 (instead of the MacOS Standard option) so it can be read and viewed in OS X.
    If your computer is too new to be able to boot from an OS 9 folder on your drive or an OS 9 CD, then see if a friend or a local library has older computers that are running OS 9 or can boot from it. If not let me know and you can send me your drive and I'll reformat it.
    Kind of crazy...I haven't used the OS 9 partition on my HD in YEARS...was even thinking about erasing it since I don't use any Classic applications anymore...good thing I didn't!
    Nick
    Powerbook G4 550Mhz   Mac OS X (10.4.6)  

  • Is my HD Dead? Reformat Disk Utility Error: secure disk erase failed with the error could not open disk.

    Hi,
    Fed up with seeing the spinning beach ball I decided to reformat my MacBook Pro...
    After backing up everything on an external hard drive I put in the OSX install DVD, restarted the machine and held down 'C'.
    I followed the install procedure, clicking next a few times etc...
    I then went into Utilities > Disk Utility. I chose 7-Pass to erase the Macintosh HD and set it off erasing.
    I checked the process an hour in and message on screen read:
    Secure disk erase failed with the error:
    could not open disk
    The internal hard drive no longer exists in Disk Utility, so I can't retry erasing it.
    The only thing that appears in Disk Utility is the OSX install DVD.
    I can't even shut down the Mac as everything under the Apple menu is greyed out!
    I'm guessing this means my hard drive is broken right?
    If anyone has any other ideas of what to try I'd really appreciate that.
    How do I turn the machine off?
    If my hard drive is gone then should I consider getting an SSD drive?
    Any recommendations for such a drive would be great.
    Hope you can help!

    Did you partition the drive?
    Extended Hard Drive Preparation
    1. Open Disk Utility in your Utilities folder. If you need to reformat your startup volume, then you must boot from your OS X Installer Disc. After the installer loads select your language and click on the Continue button. When the menu bar appears select Disk Utility from the Installer menu (Utilities menu for Tiger or Leopard.)
    2. After DU loads select your hard drive (this is the entry with the mfgr.'s ID and size) from the left side list. Note the SMART status of the drive in DU's status area. If it does not say "Verified" then the drive is failing or has failed and will need replacing. SMART info will not be reported on external drives. Otherwise, click on the Partition tab in the DU main window.
    3. Click on the Options button, set the partition scheme to GUID (only required for Intel Macs) then click on the OK button. Set the number of partitions from the dropdown menu (use 1 partition unless you wish to make more.) Set the format type to Mac OS Extended (Journaled.) Click on the Partition button and wait until the volume(s) mount on the Desktop.
    4. Select the volume you just created (this is the sub-entry under the drive entry) from the left side list. Click on the Erase tab in the DU main window.
    5. Set the format type to Mac OS Extended (Journaled.) Click on the Options button, check the button for Zero Data and click on OK to return to the Erase window.
    6. Click on the Erase button. The format process can take up to several hours depending upon the drive size.
    Steps 4-6 are optional but should be used on a drive that has never been formatted before, if the format type is not Mac OS Extended, if the partition scheme has been changed, or if a different operating system (not OS X) has been installed on the drive.
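    When the Disk Utility GUI fails like this, the command line sometimes returns a more specific error. A sketch using diskutil from Terminal (disk2 is a placeholder for the external drive's device, and the exact format token varies between OS X versions, so run the list command first):
    # Identify the external drive's device node (e.g. /dev/disk2)
    diskutil list
    # Erase the whole device as one Mac OS Extended (Journaled) volume
    diskutil eraseDisk "Journaled HFS+" NewVolume disk2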

  • SharePoint Site Mailbox failed with 503 error (AutoDiscover.svc web service call failed)

    I followed TechNet articles to configure site mailboxes in our environment & Exchange server.
    When we created a site mailbox in a site collection and tried to open it, it failed with the error below.
    Site Mailbox
    We are having trouble connecting to Exchange Server
    The server might be temporarily unavailable. Please check back on this page in a few minutes. If this problem persists, please contact your system administrator.
    Correlation ID: bb0fe99c-6f4e-e084-b191-881fbf0fa977, Error Code 10 
    ULS Log (503 error)
    Autodiscover Diagnostics Response Headers: request-id: 95d12ceb-283e-4495-b28b-256503fd097c  client-request-id: 742fe69c-ef5a-e084-ca05-6098c759c584  X-CalculatedBETarget: devapwxyz01a.devap.mydomain.com  X-FEServer: DEVNAABCD01B
     Content-Length: 0  Cache-Control: private  Date: Tue, 03 Feb 2015 18:53:40 GMT  Set-Cookie: X-BackEndCookie=; expires=Sun, 03-Feb-1985 18:53:40 GMT; path=/autodiscover; secure; HttpOnly  Server: Microsoft-IIS/8.5  X-AspNet-Version:
    4.0.30319  X-Powered-By: ASP.NET    
    742fe69c-ef5a-e084-ca05-6098c759c584
    If I am correct, X-CalculatedBETarget is supposed to be DEVNAABCD01B.devna.mydomain.com, but it connected to a different domain, devapwxyz01a.devap.mydomain.com. Do you guys have any idea on this? (I verified the same using Fiddler; it is failing right at the autodiscover.svc call.)
    I wrote a PowerShell script to connect to the autodiscover service from the SharePoint server, and this web service call was able to connect to the right server (X-CalculatedBETarget). It gave the expected response.
    I am not sure why the SharePoint web service call (X-CalculatedBETarget) is going to a different server.
    Let me know if you guys have any ideas.
    Thanks.
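    A quick way to reproduce the header check outside Fiddler is to call the service and dump the response headers; a PowerShell sketch (the URL is a placeholder, and a plain GET usually comes back as an error status, which is fine because only the routing headers matter):
    try {
        $r = Invoke-WebRequest -Uri "https://mail.mydomain.com/autodiscover/autodiscover.svc" -UseDefaultCredentials
    } catch {
        # Non-2xx responses land here; the headers are still on the response object
        $r = $_.Exception.Response
    }
    $r.Headers    # look at X-FEServer and X-CalculatedBETarget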

    Thanks for the response, Raj.
    I already followed the instructions in those links.
    The problem I am facing right now is that when the SharePoint AutoDiscover.svc web service sends a request to the Exchange server, Exchange redirects that request to a different server.
    X-CalculatedBETarget is supposed to be DEVNAABCD01B.devna.mydomain.com, but it connected to a different domain, devapwxyz01a.devap.mydomain.com.
    Let me know if you have any suggestions.
