Loading & converting geographical data to/in HANA

Current situation:
Source system: MS SQL Server 2008 containing spatial data.
SAP DS is our core loading technique.
SAP HANA is our target.
In HANA there are the column types ST_GEOMETRY and ST_POINT that can hold geography-specific data, and there are functions that operate on them (ST_Distance, for example). But the main question is: how can I load this kind of data into HANA?
Within SAP DS there is no special column type like ST_GEOMETRY:
when trying to use blob (binary, like spatial data), DS fails. So my main question is: how can I correctly load geographical data from a non-SAP system?
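(For reference, a rough sketch of what the HANA side looks like; the table name and coordinates are made up, and the constructor syntax is as documented for HANA spatial:)

CREATE COLUMN TABLE geo_demo (
  id  INTEGER PRIMARY KEY,
  loc ST_POINT            -- ST_GEOMETRY would also work for mixed shape types
);

-- spatial values are typically supplied as WKT and constructed on insert
INSERT INTO geo_demo VALUES (1, NEW ST_POINT('POINT (4.89 52.37)'));
INSERT INTO geo_demo VALUES (2, NEW ST_POINT('POINT (5.12 52.09)'));

-- spatial functions such as ST_Distance are invoked as methods on the column
SELECT a.id, b.id, a.loc.ST_Distance(b.loc) AS dist
  FROM geo_demo a, geo_demo b
 WHERE a.id < b.id;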

Hi Henk,
Officially, only loading spatial data from HANA and Oracle is certified with Data Services 4.2. When loading from Oracle, Data Services generates a SQL statement that contains a function to convert the Oracle spatial field to WKT. See http://en.wikipedia.org/wiki/Well-known_text
You could try to build a view in SQL Server that does the same. The geometry field in HANA is recognized by Data Services as a long (HANA SPS7) or blob (HANA SPS8), so you need a query statement that converts the output of your spatial field from the view into the appropriate data type in HANA. I'm not sure whether it gets loaded into HANA correctly, but it's the best you can do with the latest Data Services version.
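For illustration, a minimal sketch of such a view on the SQL Server side, with made-up table and column names (geometry columns in SQL Server 2008 expose an STAsText() method that returns WKT):

-- source-side view that exposes the spatial column as plain text (WKT)
CREATE VIEW dbo.v_locations_wkt AS
SELECT location_id,
       shape.STAsText() AS shape_wkt   -- Well-Known Text representation
FROM   dbo.locations;

Data Services then just sees shape_wkt as an ordinary character column, which you can convert in a query transform into the long/blob data type it shows for the HANA geometry field.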
Another option would be to load Esri shape files into HANA, but that's a manual operation.
Best regards,
Stefan Koster

Similar Messages

  • Authorization Issue while Data Preview from HANA View

    Hi Experts,
    We are using BW on HANA. We have created DSOs (InfoProviders) in BW and generated HANA views from them. We have also created analysis authorizations in BW for authorization-relevant characteristics. In HANA, we are able to go to the generated analytic view and preview the data from it successfully.
    Now I have created a test user and assigned a custom role with below authorizations to this user in HANA:
    - bw2hana/../REPORTING role (this role is automatically created by activation of DSO in BW).
    - Roles MODELING, MONITORING, CONTENT_ADMIN, USER.
    - Multiple system privileges although not needed, like REPO.EXPORT, REPO.IMPORT, etc.
    - Analytic Privilege  _SYS_BI_CP_ALL
    - Package Privilege: REPO.READ for all required packages (tried with ROOT package also).
    In BW system also, the test user has analysis authorizations providing access to the relevant info objects.
    But when I try to preview data for the HANA view, I get the attached error (also listed below):
    "Cannot get the data provider outline
    SAP DBTech JDBC: [2048]: Column store error: Search table error: [2950] user is not authorized"
    I tried to trace the situation in HANA and got the below details in 2 trace files:
    indexserver_alert_saphana.trc:
    [6433]{416977}[66/-1] 2014-10-14 00:59:27.541187 e CalcEngine       ceAuthorizationCheck.cpp(02365) : AuthorizationCheckHandler::addAPsToSearchObject: Error during converting SqlAPs to Query entries
    indexserver_saphana.31003.075.trc
    [6433]{416977}[66/-1] 2014-10-14 00:59:27.541197 i TraceContext     TraceContext.cpp(00702) : UserName=TEST_SSO, ApplicationUserName=<<computer name >>, ApplicationName=HDBStudio, ApplicationSource=csns.modeler.datapreview.providers.ResultSetDelegationDataProvider.<init>(ResultSetDelegationDataProvider.java:118);csns.modeler.actions.DataPreviewDelegationAction.getDataProvider(DataPreviewDelegationAction.java:278);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:242);csns.modeler.actions.DataPreviewDelegationAction.run(DataPreviewDelegationAction.java:127);csns.modeler.command.handlers.DataPreviewHandler.execute(DataPreviewHandler.java:53);org.eclipse.core.commands
    [6433]{416977}[66/-1] 2014-10-14 00:59:27.541187 e CalcEngine       ceAuthorizationCheck.cpp(02365) : AuthorizationCheckHandler::addAPsToSearchObject: Error during converting SqlAPs to Query entries
    Do you know what this "Error during converting SqlAPs to Query entries" actually means? How can we resolve this issue? The authorization works properly for our own user IDs, but we need to provide restricted access for business users, so we are trying to create a test user and a custom role.
    Thanks
    Nitesh Gupta

    Hi Pinaki and Prabhith,
    Yes, my issue was resolved. Sorry, I missed updating it here.
    I was just a beginner in BW on HANA security at that time and didn't know many of the small things. The solution was pretty simple.
    Whenever you assign analysis authorizations to a user in BW, you also need to generate the corresponding HANA authorizations. This is done through tcode RS2HANA_CHECK. This tcode converts the BW analysis authorizations into HANA analysis authorizations and assigns them to the HANA user. You can see the generated HANA authorizations in table RS2HANA_AUTH_STR in both BW and HANA.
    Once the HANA authorizations are successfully generated for a user, he should be able to see data from the views.
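    If you want to double-check from the HANA side, one (hypothetical) sanity check is to look at the effective privileges of the test user, for example:
    -- list what the test user is actually allowed to do; EFFECTIVE_PRIVILEGES requires a USER_NAME filter
    SELECT *
      FROM "SYS"."EFFECTIVE_PRIVILEGES"
     WHERE USER_NAME = 'TEST_SSO';   -- test user from the trace above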
    Let me know if this solves your issue; then I will close this thread.
    Thanks

  • Problem in loading 0calday infoobject date format through Flatfile in cube

    Hi All,
    I am facing a problem loading 0calday infoobject data through a flat file (format: test.csv) into an infocube.
    Suppose we have two flat files (test1.csv, test2.csv).
    1. The first file (test1.csv) has the proper date format (i.e. YYYYMMDD), so it loads successfully.
    2. The second file (test2.csv) has an improper date format (i.e. DDMMYYYY), and the load fails because of this format.
    Is it possible to write a routine (start routine) in the infopackage (External data tab) such that if the flat file is test1.csv (proper date format) the 0calday data is loaded without any conversion, and if the file is test2.csv the date field is converted from DDMMYYYY to YYYYMMDD before the data is finally loaded into the cube?
    With regards,
    Hari.
    +91 9323839017

    Hello Dinesh, Anil,
    There is no distinguishing field between the two flat file loads.
    We are using only one infoobject (i.e. 0calday) for both loads.
    We are using two external source systems: one system generates files with the YYYYMMDD date format, the other with DDMMYYYY, and the two file names are unique.
    My requirement is to compare the two file names in the start routine of the infopackage (tab: External data):
    if (test1.csv)
    load the 0calday data as it is (since it is already in the proper YYYYMMDD format)
    else if (test2.csv)
    convert from DDMMYYYY to YYYYMMDD and then load the data into the 0calday infoobject in the cube.
    Is it possible to compare the two file names in a start routine?
    with regards,
    Hari

  • How to load infotype 2010 data in to R/3?

    Hi,
    I posted the same question earlier but with a wrong subject line.
    My question is: how can I load infotype 2010 data into ECC 6.0? I am using PI to convert a flat file, and depending on the feature available on the R/3 side (either IDoc, BAPI, or a flat file) I will convert it to the required format.
    Can anyone tell me the best way to push this data into R/3?
    Appreciate your answers.
    Thanks
    Raju

    Sorry My Bad.
    Use this BAPI
    BAPI_PTMGREXTREMSPEC_INSERT; below is the description of the BAPI.
    This method stores the data records in the interface tables PTEX2010 and PTEXDIR. Then, the report RPTEXTPT (External Transfer -> Time Management) reads the data from PTEX2010 and stores it in the Employee Remuneration Info infotype 2010. Neither cost assignment nor activity allocation is supported with this method.
    You can try these as well.
    BAPI_PTMGREXTREMSPEC_INSWACT Insert EE remuneration info in table (with activity allocation)
    BAPI_PTMGREXTREMSPEC_INSWCOST Insert external EE remuneration info in table (w/ cost assignment)

  • Problem loading PostgreSQL Bytea data type to Oracle Raw data type

    We are migrating our database from PostgreSQL to Oracle. First, we convert the BYTEA data type in PostgreSQL to Oracle RAW. The BYTEA data type is a variable-length byte array. How can we load BYTEA data into the Oracle RAW data type? Or do I have to convert it to a different data type? Thanks.
    Peter,
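    (For illustration, one common approach, sketched with made-up table and column names: stage the BYTEA values as hex text and convert them back on the Oracle side.)
    -- PostgreSQL side: export the BYTEA column as hex text
    SELECT id, encode(payload, 'hex') AS payload_hex FROM source_docs;
    -- Oracle side: convert the staged hex text into RAW
    -- (note: RAW is limited to 2000 bytes unless extended data types are enabled)
    INSERT INTO target_docs (id, payload)
    SELECT id, HEXTORAW(payload_hex)
    FROM   stage_docs;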

    hi,
    Instead of 'interval day to second' in the method declaration, use the internal datatype 'DSINTERVAL_UNCONSTRAINED'.
    There are more unconstrained types in Oracle.
    Bartek

  • Loading leap year date using SQL*Loader

    Hello,
    I have a problem loading a date '29/02/2000' using SQL*Loader. This date is on a leap year. I'm getting an error message from SQL*Loader as 'ORA-01839: date not valid for month specified'. My colleague and I have tried using various date functions to convert the data into date, but no luck.
    I would appreciate any help,
    Bruce

    Thanks for your help, I found the bug in my control file. I was using the RTRIM function to remove bad timestamps such as '29/02/2000 0:00:00'. So instead of using this statement:
    LOG_DATE DATE "DD/MM/RRRR" "RTRIM(:LOG_DATE,'0:00:00')"
    I was using the statement below with a space before the '0:00:00' string literal, with the intention to remove a space also:
    LOG_DATE DATE "DD/MM/RRRR" "RTRIM(:LOG_DATE,' 0:00:00')"
    Well, it turned out that RTRIM treats its second argument as a set of characters to strip, not as a literal suffix. With the space included in the set, it kept trimming '0' characters from the right past the space, including the '000' that belongs to '2000'. Thus, the error.
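    The effect is easy to reproduce in SQL*Plus, purely for illustration:
    SELECT RTRIM('29/02/2000 0:00:00', '0:00:00')  AS no_space,   -- '29/02/2000 ' : stops at the space, year intact
           RTRIM('29/02/2000 0:00:00', ' 0:00:00') AS with_space  -- '29/02/2'     : the space is in the set, so trimming runs on into '2000'
    FROM   dual;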
    Thanks again,
    Bruce

  • Problems querying data after data migration to HANA

    Hi, we currently have 2 BW environments:
    - Release 701, SP-Level 0014, Support Package SAPKW70114, DB: no HANA
    - Release 740, SP-Level 0005, Support Package SAPKW74005, DB: HANA
    It was necessary to migrate data cubes from BW 701 to BW 740, because in ECC only the history of the last 18 months is maintained. After this migration, when I build a query in Query Designer and use a between in "Filters and Restrictions", no values are returned, but if I include the values individually instead of using the between, the values are displayed. For example: I created a query with one key figure, Comp_Code, Material and Fiscper. In filters and restrictions I specified that only the materials between 0001 and 0003 should be returned. When running the query it does not return values, but if I change the filter to return only materials 0001, 0002 and 0003, the query returns data. Note: this problem occurs only with the data migrated from BW 701; delta loads from ECC are functioning normally. Could you help me solve this issue? Best Regards

    Important point: I found that if you run the query in RSRT via the "Execute + Debug" button and do not use the option "Do not use SAP HANA / BWA Index (0)", the query runs without problems.
    Additional Information:
    - The BW on HANA 740 environment is a new installation.
    - To load the historical data into BW on HANA, an RFC connection was created to BW 701, i.e. the classic BW acts as a source system for the new BW on HANA.
    - In BW 701 we generated export DataSources (from cube to cube, from DSO to DSO) and used these DataSources to migrate the historical data.
    - So BW on HANA is a new installation, but only the historical data was extracted from BW 701 (no HANA).
    - For the export DataSources generated on DSOs the data is OK, but for the export DataSource of a cube the data is not OK. After loading this historical data, when I build a query in Query Designer and use a between in "Filters and Restrictions", no values are returned, but if I include the values individually instead of the between they are displayed; the problem occurs only in queries on the cube.
    - Are there any recommendations for when historical data is extracted from a cube via an export DataSource?

  • Master Data from BW - HANA

    Hi Experts,
    Can you please suggest how attribute views can be created based on BW master data infoobjects, for example 0Material.
    As we know, views get created in HANA automatically once an infocube or DSO is activated in BW on HANA. Similarly, how can we create attribute views in HANA based on master data infoobjects in BW?
    Kindly suggest.
    Thanks

    Thanks.
    I understand that we can import master data tables into HANA Studio, but what about the data? I don't think the data gets loaded this way; only the table definitions get imported from the metadata repository in BW.
    Can you please provide some more details?
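    For what it's worth, a rough sketch of where that master data sits on the HANA side, assuming the BW ABAP schema is called SAPBWP and using 0MATERIAL as the example (both names are assumptions): the generated attribute ("P") table can be queried directly, and an attribute view is then modeled on top of these tables.
    -- hypothetical schema name; /BI0/PMATERIAL is the attribute table of 0MATERIAL
    SELECT TOP 10 *
      FROM "SAPBWP"."/BI0/PMATERIAL"
     WHERE "OBJVERS" = 'A';   -- active master data records only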

  • How to load Matrix report data into basic table data using ODI

    Hi,
    How can I load matrix report data into basic table data using Oracle Data Integrator?
    Requirement Description:
    Following is the matrix report data:
    JOB         DEPT10    DEPT20
    ANALYST               6000
    CLERK       1300      1900
    Need to convert it into the below format:
    JOB         Dept      Salary
    ANALYST     DEPT10
    ANALYST     DEPT20    6000
    CLERK       DEPT10    1300
    CLERK       DEPT20    1900
        Thanks for your help in advance. Let me know if any further explanation is required.

    Your list seems to be a little restrictive; you can do a lot more with ODI procedures.
    Create a new procedure and add a step. In the 'command on source' tab set the technology and schema as per your source database. Use the unpivot functionality as described in the link, but rather than using 'SELECT *', use the appropriate column names and alias them, for example:
    SELECT job AS job,
           deptsal AS deptsal,
           saldesc AS saledesc
    FROM   pivoted_data
    UNPIVOT (
      deptsal                                  --<-- unpivot_clause
      FOR saldesc                              --<-- unpivot_for_clause
      IN (d10_sal, d20_sal, d30_sal, d40_sal)  --<-- unpivot_in_clause
    )
    Then in your 'command on target' tab set the technology and schema to your target DB, then put your INSERT statement, for example:
    INSERT INTO job_sales
      (job,
       deptsal,
       saledesc)
    VALUES
      (:job,
       :deptsal,
       :saledesc)
    Therefore you are using bind variables from source to load data into target.
    Obviously, if the source and target tables are in the same database, then you can have it all in one statement in the 'command on target', as:
    INSERT INTO job_sales
      (job,
       deptsal,
       saledesc)
    SELECT job AS job,
           deptsal AS deptsal,
           saldesc AS saledesc
    FROM   pivoted_data
    UNPIVOT (
      deptsal                                  --<-- unpivot_clause
      FOR saldesc                              --<-- unpivot_for_clause
      IN (d10_sal, d20_sal, d30_sal, d40_sal)  --<-- unpivot_in_clause
    )
    Also set the log counter to 'Insert' on the tab where your INSERT statement is, so you know how many rows you inserted into the table.
    Hope this helps.
    But remember that the UNPIVOT feature only came out in Oracle 11g.

  • Data Refresh Error: We cannot locate a server to load the workbook Data Model.

    Hello,
    Recently I developed a Power BI report using an Excel workbook with an external data source. It refreshes successfully for some days and then starts throwing an error on every scheduled refresh: "We cannot locate a server to load the workbook Data Model."
    The workaround is to restart the application server that has SQL Server PowerPivot services installed, after which it works again for some days before the error returns.
    So I am looking for a permanent fix.
    Following are additional details about our SP Farm:
    No. of front-end servers: 2, application servers: 2, DB servers: 2 with Windows failover cluster.
    PowerPivot Server is installed and configured from App 2 server.
    Error Log from event viewer:
    Unable to load custom data source provider type: Microsoft.PerformancePoint.Scorecards.DataSourceProviders.AdomdDataSourceProvider, Microsoft.PerformancePoint.Scorecards.DataSourceProviders.Standard, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c
    System.IO.FileNotFoundException: Could not load file or assembly 'Microsoft.AnalysisServices.AdomdClient, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.
    File name: 'Microsoft.AnalysisServices.AdomdClient, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'
       at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMarkHandle stackMark, IntPtr pPrivHostBinder, Boolean loadTypeFromPartialName, ObjectHandleOnStack type)
       at System.RuntimeTypeHandle.GetTypeByName(String name, Boolean throwOnError, Boolean ignoreCase, Boolean reflectionOnly, StackCrawlMark& stackMark, IntPtr pPrivHostBinder, Boolean loadTypeFromPartialName)
       at System.Type.GetType(String typeName, Boolean throwOnError)
       at Microsoft.PerformancePoint.Scorecards.Server.PmServer.InitializeCustomDataSourceProviders()
    PerformancePoint Services error code 10107.
    Thanks,
    Ibrahim

    Hi ibrahim,
    Please try to do the following steps:
    1. Install "1033\x64\SQLSERVER2008_ASADOMD10.msi" from
    http://www.microsoft.com/en-us/download/details.aspx?id=26728
    2. In the Application Management section of the Central Administration home page, click Manage service applications.
    On the Manage Service Applications page, click the Excel Services service application that you want to configure.
    On the Manage Excel Services page, click Data Model, then click Add Server.
    In the Server Name box, type the name of the Analysis Services instance that you want to add.
    3. Check the thread below:
    https://social.technet.microsoft.com/Forums/en-US/ecc18319-88d8-4dd0-bafd-fa0d2edceffb/external-data-refresh-failed-we-cannot-locate-a-server-to-load-the-workbook-data-model?forum=sharepointadmin
    More information:
    https://timpanariuovidiu.wordpress.com/2013/02/14/71/
    https://support.microsoft.com/kb/2769345?wa=wsignin1.0
    Thanks,
    Dennis Guo
    TechNet Community Support

  • PowerPivot Data Refresh Fails - We cannot locate a server to load the workbook Data Model

    Hello,
    I have installed sql01\PowerPivot instance of Analysis services on my existing SQL Server and configured it per Configure Power Pivot for SharePoint 2013
    http://technet.microsoft.com/en-us/library/dn456880(v=office.15).aspx . I did not install a second Database Engine on the server.
    I created an Excel 2013 workbook using the Excel 2013 PowerPivot add-in and saved it to my SharePoint BI Center Document library.
    When I try to refresh my workbook, I get 2 errors in the SharePoint log.
    http://support.microsoft.com/kb/2756665?wa=wsignin1.0 indicates the sproc is deprecated, but that does not help me fix the issue.
    Log errors:
    Following error occured while trying to execute a sql query: System.Data.SqlClient.SqlException (0x80131904):
    Could not find stored procedure 'DataRefresh.GetSchedule'.
    EXCEPTION: NoAvailableStreamingServerException: We cannot locate a server to load the workbook Data Model.
    Marcel

    Hi Marcel,
    Based on my research, it seems that you have the Microsoft SQL Server 2012 PowerPivot for SharePoint 2013 add-in installed on a computer, and when you perform a data refresh operation on some PowerPivot workbooks in Microsoft SharePoint 2013, you receive two error messages.
    As to the first error: this issue occurs because the Custom Properties on the Excel workbook are not correctly cleared before you set a new serialized schedule. This causes an extra custom property to remain after you set the new schedule, and therefore the new schedule fails. To fix this issue, please install Cumulative Update 9 for SQL Server 2012 SP1.
    As to the second error: please run a repair using the PowerPivot configuration wizard for SharePoint 2013. For more information about troubleshooting PowerPivot data refresh, please see:
    http://social.technet.microsoft.com/wiki/contents/articles/troubleshooting-powerpivot-data-refresh.aspx
    Thanks,
    Katherine Xiong
    TechNet Community Support

  • XEM - Unable to load the initial data or the variances(delta) data into sys

    I am installing xEM 2.0 SP 10 (SAP xApp Emissions Management) in a windows environment with SQL 5000.  I installed xEM on NW 2004, usage types AS Java and EP 6.
    I am attempting to load the initial data or the variances (delta) data into the system.  Instruction is on page 15 in the install guide.
    I am supposed to enter the following in the command line:
    java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=[JDBC Driver];[JDBCUrl];[User];[Password]
    Example command for import into SQL Server:
    java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddtek.jdbc.sqlserver.SQLServerDriver; jdbc:datadirect:sqlserver://vma03:1433;SAPC11DB;password
    The customer I am with is running the xEM database on a different instance.  This is where I run into a problem.  I am not sure how to specify the instance in the script.  This is what  I have attempted so far:
    C:\>cd temp\load
    C:\Temp\load>java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddte
    k.jdbc.sqlserver.SQLServerDriver;jdbc:datadirect:sqlserver://PRODSQL43:SQL3:1534;SAPPEMDB;password
    java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver:/
    /PRODSQL43:SQL3:1534 as user SAPPEMDB (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Unable to connect.  Inva
    lid URL.): [DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.
            at com.sap.sdm.util.dbaccess.DBTask.dbImport(DBTask.java:356)
            at com.sap.sdm.util.dbaccess.SapTransTask.perform_import(SapTransTask.java:293)
            at com.sap.sdm.util.dbaccess.SapTransTask.execute(SapTransTask.java:51)
            at com.sap.sdm.util.dbaccess.SapTrans.main(SapTrans.java:21)
    import aborted with java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:da
    tadirect:sqlserver://PRODSQL43:SQL3:1534 as user SAPPEMDB (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Unab
    le to connect.  Invalid URL.): [DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.
    C:\Temp\load>java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddte
    k.jdbc.sqlserver.SQLServerDriver;jdbc:datadirect:sqlserver://PRODSQL43;SQL3:1534;SAPPEMDB;password
    java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver:/
    /PRODSQL43 as user SQL3:1534 (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Error establishing socket. Connec
    tion refused: connect): [DataDirect][SQLServer JDBC Driver]Error establishing socket. Connection refused: connect
            at com.sap.sdm.util.dbaccess.DBTask.dbImport(DBTask.java:356)
            at com.sap.sdm.util.dbaccess.SapTransTask.perform_import(SapTransTask.java:293)
            at com.sap.sdm.util.dbaccess.SapTransTask.execute(SapTransTask.java:51)
            at com.sap.sdm.util.dbaccess.SapTrans.main(SapTrans.java:21)
    import aborted with java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:da
    tadirect:sqlserver://PRODSQL43 as user SQL3:1534 (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Error establi
    shing socket. Connection refused: connect): [DataDirect][SQLServer JDBC Driver]Error establishing socket. Connection ref
    used: connect
    C:\Temp\load>java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddte
    k.jdbc.sqlserver.SQLServerDriver;jdbc:datadirect:sqlserver://PRODSQL43:1534;SQL3;SAPPEMDB;password
    java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver:/
    /PRODSQL43:1534 as user SQL3 (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver][SQLServer]Login failed for user
    'SQL3'.): [DataDirect][SQLServer JDBC Driver][SQLServer]Login failed for user 'SQL3'.
            at com.sap.sdm.util.dbaccess.DBTask.dbImport(DBTask.java:356)
            at com.sap.sdm.util.dbaccess.SapTransTask.perform_import(SapTransTask.java:293)
            at com.sap.sdm.util.dbaccess.SapTransTask.execute(SapTransTask.java:51)
            at com.sap.sdm.util.dbaccess.SapTrans.main(SapTrans.java:21)
    import aborted with java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:da
    tadirect:sqlserver://PRODSQL43:1534 as user SQL3 (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver][SQLServer]Lo
    gin failed for user 'SQL3'.): [DataDirect][SQLServer JDBC Driver][SQLServer]Login failed for user 'SQL3'.
    C:\Temp\load>java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddte
    k.jdbc.sqlserver.SQLServerDriver;jdbc:datadirect:sqlserver://PRODSQL43:1534:SQL3;SAPPEMDB;password
    java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:datadirect:sqlserver:/
    /PRODSQL43:1534:SQL3 as user SAPPEMDB (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Unable to connect.  Inva
    lid URL.): [DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.
            at com.sap.sdm.util.dbaccess.DBTask.dbImport(DBTask.java:356)
            at com.sap.sdm.util.dbaccess.SapTransTask.perform_import(SapTransTask.java:293)
            at com.sap.sdm.util.dbaccess.SapTransTask.execute(SapTransTask.java:51)
            at com.sap.sdm.util.dbaccess.SapTrans.main(SapTrans.java:21)
    import aborted with java.lang.Exception: ERROR: Cannot connect to the database:ERROR: Connect failed to database jdbc:da
    tadirect:sqlserver://PRODSQL43:1534:SQL3 as user SAPPEMDB (java.sql.SQLException/[DataDirect][SQLServer JDBC Driver]Unab
    le to connect.  Invalid URL.): [DataDirect][SQLServer JDBC Driver]Unable to connect.  Invalid URL.
    C:\Temp\load>
    My last attempt was a command using colons and semicolons with the following results.  The closest (there was a significant delay before the error or failure) appears to have been //PRODSQL43;SQL3:1534; (second attempt).  The error listed from this attempt is   "Error establishing socket. Connection refused: connect".
    I also checked the default database that user SAPPEMDB has in place, and it is assigned the correct database.
    Please help.
    Mike Smayrabunya

    Hey,
    It looks like one of the following:
    1. The DB is down.
    2. The user SAPPEMDB does not have the right authorization.
    3. The password of the user SAPPEMDB is not 'password'.
    4. The syntax is incorrect.
    In order to find out what the problem is, please:
    1. Log in to the DB PRODSQL43:1534 with the user "SAPPEMDB" and the password "password";
    this will eliminate option 1 (DB down), option 2 (SAPPEMDB does not have authorization) and option 3 (the password of the user SAPPEMDB is not 'password').
    2. If the login fails, then please run an SQL trace with security elements (on the client there is a tool called "SQL Profiler").
    3. If the login is correct, then check the syntax of the command:
    "java -Djava.ext.dirs=. -jar SAPtrans.jar import logfile=import.log datafile=init.dat connectstring=com.ddtek.jdbc.sqlserver.SQLServerDriver; jdbc:datadirect:sqlserver://vma03:1433;SAPC11DB;password"
    According to the error message "Error establishing socket. Connection refused",
    it looks like the DB is down or the syntax is incorrect.

  • External Data Refresh Failed. We cannot locate a server to load the workbook Data Model. ThisWorkBookDataModel

    Hi All,
    I have been trying to fix this for days now. I have tried solutions in many articles but to no avail. So while the error message is something you may have seen may times, I just can't find a solution in my case.
    This is the error, in text in case the image isn't viewable:
    "External Data Refresh Failed. We cannot locate a server to load the workbook Data Model. We were unable to refresh one or more data connections in this workbook. The following connections failed to refresh: ThisWorkBookDataModel."
    What is worse is I have checked the ULS (SharePoint Trace Logs), the Event Viewer Logs and the OWA Logs and I cannot find a specific error that would help pin point the problem.
    Excel Workbook
    So what am I doing? I have an Excel 2013 workbook and I create an "SQL Server" connection to the AdventureWorksDW database, add a pivot table and a pivot chart, test it in Excel, and all works fine.
    I save the Excel workbook to SharePoint 2013 and then select "Data" then "Refresh All Connections" and then I get the error in the picture above.
    Even more puzzling: I have another Excel workbook that also has pivot tables and pivot charts, against the AdventureWorksDW2012Multidimensional cube database in "SQL Analysis Services", and this works fine. Hmm.
    My Environment
    My environment is Windows 2008 R2 Server, SharePoint 2013 with the April Service Pack1 and a separate server with OWA2013 SP1. It has an SQL Server 2008 R2 database which has been upgraded to SQL Server 2012.
    Data Model Settings
    In Excel Services this is set to my server name which is "server-name". As I do not have instances all I can enter is the server name. As this works everywhere else including the workbook outside of SharePoint I do not think this is the problem.
    But I could be wrong.
    Unattended Account
    I have set this up for the PowerPivot Services App and Excel Services App.
    ODC Connections in Excel
    I have tried all 3 authentication modes, Windows, Secure Store ID and "None" which is the unattended account. I have not tried the other connection types, should I?
    Not in WOPI
    I am not in WOPI mode.
    AD Accounts
    I have added permissions in the SharePoint Services and SQL Server, and as they work in Excel outside of SharePoint, I do not think it is a permissions issue. I could be wrong of course, but the problem is in one of SharePoint, OWA, AD,
    SQL Server, Excel, and Windows Server.
    Isolate the Error
    Below is a list of errors I think are relevant but they do not tell me much. The SharePoint logs are not really giving me an error that tells me what to do and where to do it, or even why it cannot refresh, (perhaps not noticeable by the untrained eye).
    Problem with SQL Server Not Analysis Services
    So my cube database in analysis services works fine in SharePoint/OWA but not the databases in sql server. This is my best clue but I have no idea what it means. Why would it work with an Analysis Services connection but not an "SQL Server" connection?
    It Works Outside of SharePoint
    If I run the excel worksheet outside of SharePoint all works fine. When inside OWA this is where the refresh error occurs.
    Errors from Event Viewer on SharePoint Server using ULS Viewer
    "Failed to create an external connection or execute a query. Provider message: There are no servers available or actively being initialized., ConnectionName: , Workbook:"
    "Refresh failed for 'ThisWorkbookDataModel' in the workbook 'http://server...'. [Session: 1.V22.26itT0lx8piNFeqtuGVhN214.5.en-US5.en-US36.98c0e158-9113-46e9-850e-edda81d9ed1c1.A1.N User: 0#.w|ad\testuser1]"
    And an error in the ULS under the "Data Model" category:
    "--> Check Deployment Mode (server-name): Fail (Expected: SharePoint, Actual: Multidimensional)."
    This last error, as it turned out, defined the problem concisely, although I was yet to work out what it meant in some detail.

    I finally solved this myself (or should I say with the help of several key articles).
    The refresh did not work because the database was not in "SharePoint Mode". Yes, SQL Server has modes, 3 of them in fact.
    If you installed SharePoint to the default SQL instance which would be called <servername> then you cannot use this default instance for Excel 2013 workbooks in OWA 2013 because the refresh only works if the database is in SharePoint mode.
    So what are these 3 modes? The Deployment Mode property in the msmdsrv.ini file has them as:
    0 = Multidimensional mode (the default whenever you install SQL Server normally)
    1 = PowerPivot for SharePoint mode
    2 = Tabular mode
    How do you know what mode you are in? That's easy: open SQL Server Management Studio and connect to all your SQL database engine instances (ignore Analysis Services or SSRS as they are not database engines). If you only have the default instance then that is almost definitely in Multidimensional mode, which is the default and what SharePoint installs its databases to.
    You must have an instance called <servername>\POWERPIVOT. This instance is the "sharepoint mode" needed, and the default instance name when you install an SQL instance in this mode.
    If you don't see <servername>\POWERPIVOT in SQL Server then you are not in "SharePoint mode". It is more accurate to say you do not have an instance that is in SharePoint mode, because you cannot simply switch modes on an SQL Server instance.
    You have to install a new instance in the required mode; that's the only way.
    That's easy enough. Load up the SQL Server setup CD and run setup. Install a brand new instance and select "SQL Server PowerPivot for SharePoint" when you get there in the wizard.
    Now you will have the default instance that stores all the SharePoint databases and that is in mode 0, and a new instance called <servername>\POWERPIVOT that is in mode 1. The "<servername>\POWERPIVOT" instance connection is what you
    will use for Excel 2013 when rendering in OWA 2013.
    You also need to ensure OWA 2013 is not in WOPI mode for Excel worksheets. See the last link below for more information about WOPI.
    Next you should go to the Excel Service App in CA and click Data Model Settings and add the <servername>\POWERPIVOT instance.
    Then you have to either turn off the firewall on the SQL server machine, or create an inbound rule on the Windows firewall to open the TCP port for the <servername>\POWERPIVOT instance:
    1. Start Task Manager and then click Services to get the PID of the MSOLAP$InstanceName.
    2. Run netstat -ao -p TCP from the command line to view the TCP port information for that PID.
    Finally, you can now create Excel 2013 workbooks that run in OWA without refresh errors, as long as you are connecting to the <servername>\POWERPIVOT instance. Hooray.
    REFERENCES
    Look for the string "There are no servers available or actively being initialized" in this article:
    http://blogs.msdn.com/b/analysisservices/archive/2012/08/02/verifying-the-excel-services-configuration-for-powerpivot-in-sharepoint-2013.aspx
    Determine the server mode:
    http://msdn.microsoft.com/en-au/library/gg471594(v=sql.110).aspx
    Install the SharePoint PowerPivot instance (aka SharePoint mode)
    http://msdn.microsoft.com/en-au/library/eec38696-5e26-46fa-bc83-aa776f470ce8(v=sql.110)
    Open the port for the new SQL instance:
    http://msdn.microsoft.com/en-us/library/ms174937(v=sql.110).aspx
    Turn Off WOPI for Excel OWA
    http://blogs.technet.com/b/excel_services__powerpivot_for_sharepoint_support_blog/archive/2013/01/31/powerpivot-for-sharepoint-browser-refresh-fails-data-refresh-not-supported-in-office-web-apps.aspx

  • How can I convert a .dat to a .zip on a Mac?

    Hi -
    A friend wanted me to update his company's website, which runs on an MS Exchange Server/SharePoint setup. He zipped all the HTML, JPEG, and assorted text files and attached that zip file to an email. When I received it on my Mac it had been converted to a ".dat" file. He tells me that this is the result of sending a zipped file over the MS Exchange Server mail system to a Mac. (Whatever happened to cross-platform?)
    Is there a Mac utility I can use to convert this dat file back to a zip file that I then can open on my Mac?
    TIA
    Josh

    Hello:
    This article addresses your issue:
    http://docs.info.apple.com/article.html?path=Mail/10.7/en/mlhlp1141.html
    Barry

  • How can I convert the date from M to MM?

    Dear Guru ,
    I need to upload my list to an SAP table, and in the list we are using the YYYY/M/D format (e.g. 2010/5/20, 2010/10/1).
    Now I want to convert all dates to the YYYY/MM/DD format. Is it possible to do that?
    Here is my code, but it doesn't work. It returned the "2009//3//5" format.
    data: ld_date_int type datum.
    data : test(10) type c.
    test = '2009/3/5' .
    ld_date_int = test .
    WRITE : SY-SUBRC , LD_DATE_int .
    Does SAP provide a standard function that can convert the date format?
    Thanks .
    Best Regards,
    Carlos Zhang

    Hi,
    You can try it this way:
    data: ld_date_int type string.
    data: ld_string type i.                  " length of the input date string
    data: test(10) type c,
          ld_res1(4) type c,                 " year part
          ld_res2(2) type c,                 " month part
          ld_res3(2) type c.                 " day part
    data: ll_res2 type i,
          ll_res3 type i.
    test = '2009/03/5'.
    ld_date_int = test.
    ld_string = strlen( ld_date_int ).
    case ld_string.
      when 10.
        " already in YYYY/MM/DD format, nothing to pad
        write: sy-subrc, ld_date_int.
      when others.
        " split into year/month/day and pad the short parts with a leading zero
        split ld_date_int at '/' into ld_res1 ld_res2 ld_res3 in character mode.
        ll_res2 = strlen( ld_res2 ).
        ll_res3 = strlen( ld_res3 ).
        if not ll_res2 eq 2 and not ll_res3 eq 2.
          concatenate '0' ld_res2 into ld_res2.
          concatenate '0' ld_res3 into ld_res3.
          concatenate ld_res1 '/' ld_res2 '/' ld_res3 into ld_date_int.
          write: sy-subrc, ld_date_int.
        endif.
        if ll_res2 eq 2 and not ll_res3 eq 2.
          concatenate '0' ld_res3 into ld_res3.
          concatenate ld_res1 '/' ld_res2 '/' ld_res3 into ld_date_int.
          write: sy-subrc, ld_date_int.
        endif.
        if not ll_res2 eq 2 and ll_res3 eq 2.
          concatenate '0' ld_res2 into ld_res2.
          concatenate ld_res1 '/' ld_res2 '/' ld_res3 into ld_date_int.
          write: sy-subrc, ld_date_int.
        endif.
    endcase.
