What is SAP HANA DXC?

Hi SAP HANA Experts,
Can anyone tell me what SAP HANA DXC is?
Is it an add-on, or a built-in software component that ships with the SAP HANA database package itself?
Do we need to download SAP HANA DXC separately?
Where can the DXC software component be found on the SAP Service Marketplace (SMP)?

Hello,
The SAP HANA Direct Extractor Connection (DXC) is a means of providing out-of-the-box foundational data models to SAP HANA based on SAP Business Suite entities; no add-on needs to be installed. DXC is also a data acquisition method for SAP HANA: with DXC, SAP Business Content DataSource extractors deliver data directly to SAP HANA (a small illustrative query follows).
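Once DXC is set up, each DataSource loads into an In-Memory DataStore Object (IMDSO) in the target HANA schema, and its active-data table can be queried like any other table. A minimal sketch, assuming a target schema DXC_SCHEMA and an IMDSO active-data table /BIC/AZFIGL00 (both names are hypothetical):

-- Inspect data delivered by a Business Content extractor via DXC (names assumed)
SELECT TOP 10 *
FROM "DXC_SCHEMA"."/BIC/AZFIGL00";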
Refer to:
http://help.sap.com/hana/sap_hana_direct_extractor_connection_implementation_guide_en.pdf
SAP Note 1665602 - Setup & Config: SAP HANA Direct Extractor Connection (DXC)
Regards,
Ning Tong

Similar Messages

  • CPU workload optimization in SAP HANA

    Hi All,
    In virtualized SAP HANA there is an option to assign/optimize vCPUs to correspond to physical CPUs, as mentioned in the document below:
    http://www.vmware.com/files/pdf/SAP_HANA_on_vmware_vSphere_best_practices_guide.pdf
    But are the features below available in physical SAP HANA systems?
    Can we manually assign dedicated CPU cores to a particular user or users?
    Or is there a way to reserve certain CPU cores for particular application/schema threads or sessions?
    Thanks for your help!!
    -Gayathri

    No, there is no such CPU-core-based allocation option available.
    You can limit how many worker threads will be created (see the sketch below), in case you need to balance CPU usage in a multi-instance setup.
    However, you don't have any control over the actual amount of CPU resources that any SAP HANA instance, let alone a DB user or query, consumes.
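    As a rough illustration of that worker-thread limit (not core binding): execution concurrency can be capped with a configuration parameter. A minimal sketch, assuming the execution/max_concurrency parameter in global.ini applies to your revision:

        -- Cap the number of worker threads the job executor may use (assumed parameter)
        ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
            SET ('execution', 'max_concurrency') = '16'
            WITH RECONFIGURE;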
    Comparing this with what you can do in vSphere is not really apt, as we are looking at a different level of abstraction here.
    To SAP HANA the machine that vSphere emulates will have x cores and SAP HANA will use these x cores - all of them.
    It's important not to forget that you have an additional layer of indirection here and things like CPU core binding can easily have negative side effects.
    For SAP HANA users it would be more interesting to have workload management within SAP HANA that allows managing different requirements for responsiveness and resource usage (it's not just CPU), and that is what SAP HANA development is working on.
    Maybe we'll be lucky and see some features in this direction later this year.

  • SAP HANA Rapid Deployment Solution

    Hi,
    What is the SAP HANA Rapid Deployment Solution?
    Priya

    Hi Priya,
    RDS packages are pre-developed content for HANA which customers can import into their system and use as ready-made models.
    Basically, an RDS package contains a set of predefined Attribute, Analytic and Calculation views for different business modules such as SD (Sales & Distribution), MM (Materials Management), LE (Logistics), etc.
    Customers have to make sure that the underlying tables used by these views exist in their HANA system; then they can simply import the RDS content and create reports on top of it (a quick check is sketched below).
    You will have the following Content structure in HANA Studio after importing the RDS package.
    sap->
        ecc->
           fnd,mm,sd,le
    In short, RDS packages are simply pre-developed content for different business modules.
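    One quick way to verify the prerequisites before importing the content is to check the required source tables in the replication schema. A minimal sketch, assuming the replicated ECC tables live in a schema named ECC_REPL and that an SD view needs VBAK, VBAP and KNA1 (schema and table list are assumptions):

        -- List which of the required source tables exist and how many rows they hold
        SELECT TABLE_NAME, RECORD_COUNT
          FROM SYS.M_TABLES
         WHERE SCHEMA_NAME = 'ECC_REPL'
           AND TABLE_NAME IN ('VBAK', 'VBAP', 'KNA1');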
    Rgds,
    Murali

  • What are the different types of analytic techniques possible in SAP HANA, with examples?

    Hello Gurus,
    Please provide information on the different types of analytic techniques possible in SAP HANA, with examples.
    I would like to know about the categories of predictive analysis, advanced statistical analysis, segmentation analysis, data reduction techniques and forecasting techniques.
    Which Analytic techniques are possible in SAP HANA?
    Thanks and Regards
    Sushma C Narasimhamurthy

    Hi Sushma,
    You can download the user guide here:
    http://www.google.com.au/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0CFcQFjAB&url=http%3A%2F%2Fhelp.sap.com%2Fbusinessobject%2Fproduct_guides%2FSBOpa10%2Fen%2Fpa_user_en.pdf&ei=NMgHUOOtIcSziQfqupyeBA&usg=AFQjCNG10eovyZvNOJneT-l6J7fk0KMQ1Q&sig2=l56CSxtyr_heE1WlhfTdZQ
    It has a list of the algorithms, which are pretty disappointing, I must say. No Random Forests? No ensembling methods? Given that it's using R algorithms, I must say this is a missed opportunity to beat products like SPSS and SAS at their own game. If SAP were to include this functionality, they would be the only BI vendor capable of having a serious predictive tool integrated with the rest of the platform.... but this looks pretty weak.
    I can only hope a later release will remedy this - or maybe the SDK will allow me to create what I need.
    As things stand, I could build a random forest using this tool, but I would have to use a lot of hardcoded SQL to make it happen. And if I wanted to go down that road, I could use the algorithms that come with the Microsoft/Oracle software.
    Please let me be wrong........
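    For what it's worth, the R integration that the product builds on can also be called directly from HANA as an RLANG procedure, which is one way to get an algorithm such as a random forest without hardcoding the whole thing in SQL. A rough sketch only: it assumes the external R runtime (Rserve) is configured for the system and that table types TT_TRAIN, TT_SCORE and TT_RESULT exist (all names are hypothetical):

        -- Wrap an R random forest in a HANA procedure (sketch, not production code)
        CREATE PROCEDURE predict_rf (IN train TT_TRAIN, IN score TT_SCORE, OUT result TT_RESULT)
        LANGUAGE RLANG AS
        BEGIN
            library(randomForest)
            model  <- randomForest(as.factor(TARGET) ~ ., data = train)  -- TARGET column assumed in TT_TRAIN
            result <- data.frame(ID = score$ID, PREDICTION = as.character(predict(model, score)))
        END;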

  • What is a dynamic join in SAP HANA?

    What is a dynamic join in SAP HANA and how does it work? Please explain with an example how to use it.

    Hi Sree,
    In very simple and basic terms:
    If you have tables A and B with columns C1, C2 and C3 used in a multi-column join (with Dynamic Join set to true), then depending on which columns you select in the query, ONLY those columns will be used in the join.
    For example, if you select C1 and C2 in the select statement, the join will happen only on C1 and C2; C3 will not be used in the join criteria, even though the join definition involves all three columns (sketched in SQL below).
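    A rough SQL equivalent of that behavior (illustrative only; the calculation engine generates the actual plan, and the table, column and measure names here are hypothetical):

        -- Query selecting only C1 and C2: the dynamic join collapses to those two columns
        SELECT a.C1, a.C2, SUM(b.AMOUNT) AS AMOUNT
          FROM A a
          JOIN B b
            ON a.C1 = b.C1
           AND a.C2 = b.C2      -- C3 is left out of the join condition
         GROUP BY a.C1, a.C2;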
    Regards,
    Ravi

  • Why SAP HANA? What makes it different from the data warehousing concept?

    Hi,
    Greetings! I am somewhat lost with SAP's newly introduced database, SAP HANA. I have been told that HANA was developed mainly for fast query processing; the older method of query processing takes a lot of time, i.e. the request goes to the database and retrieves the information. If HANA brings IMDB (in-memory database) technology, the data warehouse concept also addresses the waiting time of query processing.
    Could someone explain why HANA was introduced if the data warehouse concept already existed?

    Hi Venkatesh Garlapat,
    SAP HANA is not just about reinventing DW concepts; it is a lot more than that.
    Probably Google would have helped you, but let me point you to content that can clarify your doubts. Please refer to:
    http://www.saphana.com/welcome
    About HANA | SAP HANA
    Features | SAP HANA
    SAP HANA Outperforms | SAP HANA
    SAP HANA Cloud Platform | SAP HANA
    Data Compatibility | SAP HANA
    I hope the above links help.
    Happy Learning.
    Regards
    Kumar

  • Open Hub (SAP BW) to SAP HANA data loading through a DB connection: "Delete data from table" option is not working

    Issue:
    I have an SAP BW system and an SAP HANA system.
    SAP BW connects to SAP HANA through a DB connection (named HANA).
    Whenever I create an Open Hub destination of type DB table using this DB connection, the table is created at the HANA schema level (L_F50800_D).
    I executed the Open Hub service without checking the "Delete data from table" option.
    16 records were loaded from BW to HANA.
    On the second execution from BW to HANA there were 32 records (the load appends).
    Then I executed the Open Hub service with the "Delete data from table" option checked.
    Now I am getting the short dump DBIF_RSQL_TABLE_KNOWN.
    From SAP BW to SAP BW it works fine.
    Is this option supported through a DB connection or not?
    Please see the attachment along with this discussion and help me resolve this.
    From
    Santhosh Kumar

    Hi Ramanjaneyulu ,
    First of all, thanks for the reply.
    The issue is at the Open Hub level (definition level: Destination tab and Field Definition).
    There is a checkbox there that I have already selected; that is exactly my issue: even though it is selected,
    the deletion is not performed at the target.
    SAP BW to SAP HANA via DB connection:
    1. First run from BW: suppose 16 records; the DTP is executed and 16 records are loaded into HANA.
    2. Second run from BW: the HANA side appends, so 16 + 16 = 32.
    3. So I selected the "Delete data from table" checkbox at the Open Hub level.
    4. Now executing the DTP throws a short dump: DBIF_RSQL_TABLE_KNOWN.
    Please tell me how to resolve this. Is the "Delete data from table" option applicable for HANA?
    Thanks
    Santhosh Kumar
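    (For reference, the "Delete data from table" option effectively empties the target table before the load. Done manually on the HANA side, that corresponds to the sketch below; the table name is taken from the thread, while the schema name is an assumption. This is only a manual workaround while the short dump is investigated, not a fix for it.)

        -- Clear the generated Open Hub target table in HANA manually (adjust schema name)
        TRUNCATE TABLE "HANA"."L_F50800_D";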

  • Currency conversion error in SAP HANA

    Hi,
    I am new to SAP HANA and learning to create information views in HANA studio (SAP HANA SP6 on Cloudshare, HANA studio 1.0.68). I am trying to create a simple analytic view (on purchaseOrderItem table in SAP_HANA_EPM_DEMO sample database) to have GrossAmount converted to EUR.
    I added a calculated column as follows:
    When I click "OK", I get the error:
    "The check box 'Calculate before aggregation' has been unchecked, because the definition of the calculated column contains measures with currency conversion, restricted measures or operands with input parameters. For such a calculated column the calculation is always done after the aggregation."
    and the "Calculate before aggregation" checkbox gets unchecked. See the screenshot below:
    Please suggest what the reason could be. Thanks in advance.
    Regards,
    Amit

    Hi Amit,
    If you uncheck the "Calculate before aggregation" checkbox and activate the view, you will see in the generated log that a calc scenario is created (a view with an /olap wrapper). Due to the calc scenario, aggregation is defined as the default behavior for the key figures, and hence the calculation cannot be done before aggregation.
    By the way, I did not understand why you need "calculate before aggregation" for a KF that is just a copy of another KF. If you need the gross amount in local currency and in EUR, just perform the currency conversion without the "Calculate before aggregation" checkbox. It will work.
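    To see why the order matters at all, here is the difference in plain SQL (table and column names are hypothetical; inside an information view the calc engine applies the same logic):

        -- Calculate AFTER aggregation: the expression is applied to already-summed measures
        SELECT PRODUCT_ID, SUM(PRICE) * SUM(QUANTITY) AS ORDER_VALUE
          FROM "DEMO"."PO_ITEM"
         GROUP BY PRODUCT_ID;

        -- Calculate BEFORE aggregation: the expression is applied per row, then summed
        SELECT PRODUCT_ID, SUM(PRICE * QUANTITY) AS ORDER_VALUE
          FROM "DEMO"."PO_ITEM"
         GROUP BY PRODUCT_ID;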
    Regards,
    Ravi

  • SAP HANA modelling Standalone

    Hello Experts,
    We are in the process of a HANA standalone implementation, with Design Studio as the reporting tool. While modeling, I could not figure out the answers to some of the questions below. Experts, please help.
    Best way of modeling: SAP HANA Live is built entirely on calculation views; there are no attribute and analytic views. I have received different answers as to why there are only calculation views and no analytic or attribute views. We are on SP7, the latest version. This is a brand-new HANA on top of a non-SAP (DB2) source. What is the best way to model this scenario: can we model everything in calculation views like SAP HANA Live, or do you suggest using the standard attribute, analytic and calculation views to build the data model? Is SAP moving away from AV & AT to calculation views only, to simplify the modeling approach?
    Reporting: We are using Design Studio as the front-end tool. For example, if we were using BW, we would bring all the data into BW from different sources, build the cubes and use BEx queries. In the BEx query we would use restricted key figures, calculated key figures, calculations, etc. From the reporting side we have the same requirements: calculations, RKFs, CKFs, sum, average, etc. If we are using Design Studio on top of standalone HANA, where do I need to implement all these calculations? Is it in different views? (From a reporting perspective, if it were a BW system, I would have done all the calculations in BEx.)
    Universe: If we are doing all the calculations in SAP HANA (RKFs, CKFs and other calculations), what is the point in having the additional universe layer, given that the reporting components can access the views directly? In one of our POCs we found that using a universe affected performance.
    Real-time reporting: Our overall objective is to meet real-time or near real-time reporting requirements. How can Data Services help? I can schedule the data loads every 3 or 5 minutes to pull data from the source. If I am using Data Services, how soon can I get the data into HANA? I know it depends on the number of records, the transformations between the systems and the network speed. Assuming I schedule the job every 2 minutes and it takes another 5 minutes to process the Data Services job, is it fair to say that my information will be available in the BOBJ tools within 10 minutes of the records being created?
    Are there any new ETL capabilities included in SP7? I see some additional features in SP7. Are the concepts discussed above still valid, given that SP7 introduces the star join concept?
    Thanks
    Magge

    magge kris wrote:
    Hello Experts,
    We are in the process of HANA Standalone implementation and design studio as reporting tool. When I am modeling, I did not figure out answers to some of the below questions .Below are the questions. Experts, please help.
    Best way of modeling: The SAP HANA LIVE is completely built on calculation view; there are no Attribute and Analytical views. I have got different answer why there is only Calculation view and there are no Alaytic view and Attribute views. We are in SP7 latest version. This is a brand new HANA in top of non-SAP (DB2 source).  What is the best way to model this scenario, meaning, can we model everything in the Calculation view’s like SAP HANA live or do you suggest using the standard attribute, analytical and calculation views to do the data model. Is SAP moving away from AV & AT to only calculation Views to simply the modeling approach?
    >> I haven't read any "official" guidance to move away from the typical modeling approach, so I'd say stick with the usual approach: attribute views, then analytic views, then calculation views. I was told that the reason for the different approach with HANA Live was to simplify development for mass production of solutions.
    Reporting: We are using the design studio as front end tool. Just for example, if we assume that we are
    Using the BW, we bring all the data in to BW from different sources, build the cubes and use the bex query. Here in bex query we will be using the restricted key figures, calculated key figures calculations etc. From the reporting wise, we have the same requirements, calculations, RKF, CKF,Sum, Avg etc. if we are Using the design studio on top of standalone HANA, where do I need to implement all these calculations? Is it in different views?  (From reporting perspective, if it’s BW system, I would have done all the calculations in BEx.)
    >> I'm not a BW guy, but from a HANA perspective - implement them where they make the most sense. In some cases this is obvious - restricted columns are only available in analytic views. It's hard to provide more detailed advice here - it depends on your scenario(s). Review your training materials and SCN posts and you should start to develop a better idea of where to model particular requirements. Most of the time in typical BI scenarios, requirements map nicely to straightforward modeling approaches such as attribute/analytic/calculation views. However, some situations, such as slowly changing dimensions or certain kinds of calculations (e.g. calculate before aggregation with BODS as the source, where the calculation should be done in the ETL logic), can be more complex. If you have specific scenarios that you're unsure about, post them here on SCN. (A small SQL illustration of RKF/CKF-style measures follows this post.)
    Universe: If we are doing all the calculations in SAP HANA like RKF. CKF and other calculations , what is the point in having additional layer of universe , because the reporting compnets cam access the queries directly on views .In one of our POC , we found that the using universe affect performance.
    >>> Depends on what you're doing. Universe generates SQL just like front-end tools, so bad performance implies bad modeling. Generally speaking - universes *can* create more autonomous reporting architecture. But if your scenario doesn't require it - then by all means, avoid the additional layer if there's no added value.
    Real time reporting: Our overall objective is to give a real time or close to real time reporting requirements, how data services can help, meaning I can schedule the data loads every 3 or 5 min to pull the data from source. If I am using the Data services, how soon I can get the data in HANA, I know it depends on the no of records and the transformations in between the systems & network speed. Assuming that I will schele the job every 2 min and it will take another 5 min to process the Data services job , is it fair to say the my information will be available on the BOBJ tools with in 10 min from the creation of the records.
    Are there any new ETL capabilities included in SP7, I see some additional features included in SP7. Is some of the concepts discussed are still valid, because in SP7 we have star join concept.
    >>> Not exactly sure what your question here is. Your limits on BODS are the same as with any other target system - doesn't depend on HANA. The second the record(s) are committed to HANA, they are available. They may be in delta storage, but they're available. You just need to work out how often to schedule BODS - and if your jobs are taking 5 minutes to run, but you're scheduling executions every 2 minutes, you're going to run into problems...
    Thanks
    Magge
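    To make the reporting point above concrete: RKF- and CKF-style measures can be expressed as restricted and calculated columns in the information views, or checked quickly in plain SQL against the model. A minimal sketch with hypothetical table and column names:

        SELECT
            REGION,
            SUM(CASE WHEN ORDER_TYPE = 'RETURN' THEN AMOUNT ELSE 0 END) AS RETURN_AMOUNT,  -- restricted (RKF-style)
            SUM(AMOUNT) - SUM(DISCOUNT) AS NET_AMOUNT                                       -- calculated (CKF-style)
          FROM "SALES"."ORDERS"
         GROUP BY REGION;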

  • Issues while creating new user in SAP HANA

    Hello Team
    When I try to create a new user in SAP HANA Studio, I can see that a new validity field has been added, with two options: a) Valid From and b) Valid Until. No matter what dates I give, I get the following error while creating the user: Status: inactive;
    Reason: outside validity period. Please find the screenshot attached below. Please suggest what dates should be given in this field, with a sample example.
    Regards

    Prag,
    Try this. Execute the following in a SQL window started by a userid that has been granted the USER ADMIN system privilege:
    ALTER USER BODS1 VALID FROM NOW UNTIL FOREVER;
    You can use a timestamp instead of FOREVER, e.g. '2016-12-31 23:59'.
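    The same statement with an explicit end timestamp instead of FOREVER (a variant of the line above):
        ALTER USER BODS1 VALID FROM NOW UNTIL '2016-12-31 23:59';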
    Good luck,
    Robert

  • Process to Upgrade SAP HANA from SPS06 to SPS07 on distributed systems

    Hi Experts,
    We have a requirement to upgrade our SAP HANA system from SPS06 to SPS07, but I have the questions below:
    1) Do we need to consider any Notes for pre- and post-upgrade steps before applying the upgrade? (As far as I know we have to consider post-upgrade Note 1962472; are there any others?)
    2) As far as I know, we just need to stop the SAP application systems and then back up the HANA system.
    3) We also have a distributed environment in our landscape (three hosts). What would the upgrade approach be? Is it the same as for a single-host environment (since /hana/shared is common to all distributed hosts)?
    Could anyone please clarify these questions?
    Also, for your information, we have BW systems on top of the HANA DB.
    Thanks very much in advance !
    Kind Regards,
    Arun Reddy

    Hi John,
    Thank you for the response. Our source and target levels are: source 1.00.69.00.385196, target 1.00.74.03.392810.
    One more quick question: can't we upgrade between the above-mentioned versions in one go, or do we need to upgrade in two steps, first from 1.00.69.00.385196 to 1.00.73.00.389160 and then to the target level 1.00.74.03.392810?
    Also, as you said, in a distributed environment we can upgrade in the same way as in a single-host environment. So is it the same as the approach below?
    1) Stop the HANA database.
    2) Navigate to the media path and execute hdblcm in GUI or command-line mode.
    3) Select the option to update the existing system.
    4) Provide the passwords and the inputs the system prompts for.
    5) Select the target levels of the HANA components.
    6) Once the upgrade completes successfully, upgrade the HANA client on all application servers.
    If my assumption above is correct, then I have a couple of questions:
    1) On which host do we need to log in and perform the upgrade (we have three hosts: HANADB1, HANADB2, HANADB3)? Is it okay to do it from any of the three?
    2) Also, if the upgrade stops in the middle due to errors, is there an option to resume it from the point where it stopped, or do we need to restore the last successful backup and perform the upgrade again from the start?
    Sorry for asking so many questions; I have done the upgrade in a single-host environment, but not in a distributed one, so I want to be sure of all the points before carrying out the activity.
    Thanks in advance for your help and patience!
    Kind Regards,
    Arun Reddy

  • Connect SAP HANA Studio to database on local server

    I've installed SAP HANA Studio, and my application is running on a local server. How could I connect to my local database from Studio?
    What are the values for host name and instance number, and the options for the JDBC connection? Where can I find this information?

    Not related to UI Development Toolkit for HTML Developer Center.  Moved to SAP HANA Developer Center community.
    Regards, Mike (Moderator)

  • DBLINK truncation with SAP HANA db

    Hi - I have Oracle 11g installed on my Windows laptop and a database link connected to SAP's HANA database via ODBC, using the HANA ODBC driver. My NVARCHAR data in HANA is being truncated in half. I am working through SQL*Plus, with the same result in the SQL Developer client tool. The VARCHAR data is OK. I created three Oracle instances with the only difference being the NLS_CHARACTERSET and NLS_NCHAR_CHARACTERSET values. I have three SIDs: orcl, orclu and orclutf8, all with the same result. My gateway settings for each are the same. I started testing with SID orcl, and once I found the problem I decided to create orclu and orclutf8. On our Unix boxes we have orcl and orclu settings and those behave the same way (we use unixODBC as the driver manager).
    I have provided the orclutf8 gateway .ora file and the orclutf8 system info below.
    Symptoms/Info:
    The character set of HANA db is AL32UTF8.
    The HANA DB table contains NVARCHAR columns with Unicode values (e.g. em dash, even Chinese characters). The NVARCHAR columns get cut in half, as shown in SQL*Plus (same in SQL Developer).
    For the half that does show up, the actual Unicode character appears in SQL*Plus as either an unprintable character, an upside-down question mark or a \u character. This is OK because there are no abends, so the data gets processed, and my customers deal with the non-converted data; that is fine with them.
    Since all SIDs behave the same way, I have provided the information for orclutf8: initdwutf.ora, the system info and the trace file. Of all the configurations it SHOULD be the one that works, since it has the exact same character set as HANA.
    I have two tables in HANA with the same number of columns and rows; the only difference is NVARCHAR versus VARCHAR. There are three columns of length 3, 20 and 150.
    I took an Oracle trace while selecting from each table and compared the two. I pasted a picture at the bottom: the left side is the VARCHAR table and the right side the NVARCHAR table. You can see the HANA ODBC driver report a truncation issue on line 209, but I do not see this error in SQL*Plus. I have an SAP incident open on this.
    Is there something on the Oracle side that can be tried? For example, in the trace comparison picture, the VARCHAR trace shows that the data size for each column was doubled from 3, 20 and 150 to 6, 40 and 300; in the NVARCHAR trace it was not.
    SID: orcl
                    SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET';
                    WE8MSWIN1252
                    SELECT value$ FROM sys.props$ WHERE name = 'NLS_NCHAR_CHARACTERSET';
                    AL16UTF16
    SID: orclu
                    SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET';
                    AL32UTF8
                    SELECT value$ FROM sys.props$ WHERE name = 'NLS_NCHAR_CHARACTERSET';
                    AL16UTF16
    SID: orclutf8
                    SELECT value$ FROM sys.props$ WHERE name = 'NLS_CHARACTERSET';
                    AL32UTF8
                    SELECT value$ FROM sys.props$ WHERE name = 'NLS_NCHAR_CHARACTERSET';
                    UTF8
    initdw7utf.ora:
    # This is a sample agent init file that contains the HS parameters that are
    # needed for the Database Gateway for ODBC
    # HS init parameters
    #HS_FDS_CONNECT_INFO = <odbc data_source_name>
    HS_FDS_CONNECT_INFO = HANADW7
    HS_FDS_TRACE_LEVEL=DEBUG
    #HS_LANGUAGE=AL32UTF8
    HS_LANGUAGE=AMERICAN_AMERICA.AL32UTF8
    HS_FDS_REMOTE_DB_CHARSET=AL32UTF8
    # Environment variables required for the non-Oracle system
    #set <envvar>=<value>
    SELECT * FROM sys.props$:
    DICT.BASE       2
    DEFAULT_TEMP_TABLESPACE           TEMP
    DEFAULT_PERMANENT_TABLESPACE            USERS
    DEFAULT_EDITION       ORA$BASE
    Flashback Timestamp TimeZone            GMT
    TDE_MASTER_KEY_ID
    DST_UPGRADE_STATE            NONE
    DST_PRIMARY_TT_VERSION    11
    DST_SECONDARY_TT_VERSION          0
    DEFAULT_TBS_TYPE   SMALLFILE
    NLS_LANGUAGE          AMERICAN
    NLS_TERRITORY          AMERICA
    NLS_CURRENCY          $
    NLS_ISO_CURRENCY   AMERICA
    NLS_NUMERIC_CHARACTERS  .,
    NLS_CHARACTERSET  AL32UTF8
    NLS_CALENDAR          GREGORIAN
    NLS_DATE_FORMAT    DD-MON-RR
    NLS_DATE_LANGUAGE            AMERICAN
    NLS_SORT       BINARY
    NLS_TIME_FORMAT     HH.MI.SSXFF AM
    NLS_TIMESTAMP_FORMAT      DD-MON-RR HH.MI.SSXFF AM
    NLS_TIME_TZ_FORMAT            HH.MI.SSXFF AM TZR
    NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
    NLS_DUAL_CURRENCY            $
    NLS_COMP      BINARY
    NLS_LENGTH_SEMANTICS       BYTE
    NLS_NCHAR_CONV_EXCP       FALSE
    NLS_NCHAR_CHARACTERSET UTF8
    NLS_RDBMS_VERSION            11.2.0.1.0
    GLOBAL_DB_NAME     ORCLUTF8
    EXPORT_VIEWS_VERSION      8
    WORKLOAD_CAPTURE_MODE           
    WORKLOAD_REPLAY_MODE  
    NO_USERID_VERIFIER_SALT   57505D68AFECC3BCECE484A1C42CC8CE
    DBTIMEZONE   00:00

    1) When I tried HS_KEEP_REMOTE_COLUMN_SIZE=LOCAL, the NVARCHAR select statement still truncated the values and displayed them in SQL*Plus.
    For the VARCHAR select statement, it simply errored out in SQL*Plus.
    ERROR:
    ORA-28562: Heterogeneous Services data truncation error
    ORA-02063: preceding line from DEVUTF8
    no rows selected
    I commented out the HS_KEEP_REMOTE_COLUMN_SIZE=LOCAL for now.
    2) For the NVARCHAR select statement, I do not get an error message via SQL*Plus; I get the records displayed truncated to half of what they should be. A native ODBC error does show up in the Oracle trace file; I think that comes from the HANA ODBC driver. It is line 209 of the picture in my original thread.
    3) DESCRIBE commands output below:
    SQL> desc ESBA_DB.ZTESTSAP@DEVUTF8 - THIS IS THE NVARCHAR TABLE. The sizes match what is in HANA db.
    Name                                      Null?    Type
    MANDT                                     NOT NULL NVARCHAR2(3)
    NAME                                      NOT NULL NVARCHAR2(20)
    NAME_150                                  NOT NULL NVARCHAR2(150)
    SQL> desc PTAN.ZTESTSAP_VC@DEVUTF8 - THIS IS THE VARCHAR TABLE.The sizes do not match what is in HANA db.
    Name                                      Null?    Type
    MANDT                                              VARCHAR2(1)
    NAME                                               VARCHAR2(6)
    NAME150                                            VARCHAR2(50)
    4) Below is the gateway trace. I included everything from the first occurrence of hgodscr through to the end. You can see the HANA ODBC driver truncation.
    Entered hgodscr, cursor id 1 at 2014/10/02-11:15:41
    Allocate hoada @ 03705518
    Entered hgopcda at 2014/10/02-11:15:41
    Column:1(M): dtype:-9 (WVARCHAR), prc/scl:3/0, nullbl:1, octet:3, sign:1, radix:0
    Exiting hgopcda, rc=0 at 2014/10/02-11:15:41
    Entered hgopcda at 2014/10/02-11:15:41
    Column:2(N): dtype:-9 (WVARCHAR), prc/scl:20/0, nullbl:1, octet:20, sign:1, radix:0
    Exiting hgopcda, rc=0 at 2014/10/02-11:15:41
    Entered hgopcda at 2014/10/02-11:15:41
    Column:3(N): dtype:-9 (WVARCHAR), prc/scl:150/0, nullbl:1, octet:150, sign:1, radix:0
    Exiting hgopcda, rc=0 at 2014/10/02-11:15:41
    hgodscr, line 910: Printing hoada @ 03705518
    MAX:3, ACTUAL:3, BRC:100, WHT=5 (SELECT_LIST)
    hoadaMOD bit-values found (0x40:TREAT_AS_NCHAR)
    DTY         NULL-OK  LEN  MAXBUFLEN   PR/SC  CST IND MOD NAME
    12 VARCHAR Y          3          3 128/  3 1000   0  40 MANDT
    12 VARCHAR Y         20         20 128/ 20 1000   0  40 NAME
    12 VARCHAR Y        150        150 128/150 1000   0  40 NAME_150
    Exiting hgodscr, rc=0 at 2014/10/02-11:15:41
    Entered hgoftch, cursor id 1 at 2014/10/02-11:15:41
    hgoftch, line 130: Printing hoada @ 03705518
    MAX:3, ACTUAL:3, BRC:100, WHT=5 (SELECT_LIST)
    hoadaMOD bit-values found (0x40:TREAT_AS_NCHAR)
    DTY         NULL-OK  LEN  MAXBUFLEN   PR/SC  CST IND MOD NAME
    12 VARCHAR Y          3          3 128/  3 1000   0  40 MANDT
    12 VARCHAR Y         20         20 128/ 20 1000   0  40 NAME
    12 VARCHAR Y        150        150 128/150 1000   0  40 NAME_150
    Performing delayed open.
    SQLBindCol: column 1, cdatatype: -8, bflsz: 6
    SQLBindCol: column 2, cdatatype: -8, bflsz: 22
    SQLBindCol: column 3, cdatatype: -8, bflsz: 152
    Entered hgopoer at 2014/10/02-11:15:41
    hgopoer, line 233: got native error 0 and sqlstate 01004; message follows...
    [SAP AG][LIBODBCHDB32 DLL] Data truncated {01004}[SAP AG][LIBODBCHDB32 DLL] Data truncated {01004}[SAP AG][LIBODBCHDB32 DLL] Data truncated {01004}[SAP AG][LIBODBCHDB32 DLL] Data truncated {01004}[SAP AG][LIBODBCHDB32 DLL] Data truncated {01004}
    Exiting hgopoer, rc=0 at 2014/10/02-11:15:41
    hgoftch, line 740: calling SQLFetch got sqlstate 01004
    SQLFetch: row: 1, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 1, column 1, bflsz: 6,  bflar: 6, (bfl: 3, mbl: 3)
    SQLFetch: row: 1, column 2, bflsz: 22, bflar: 6
    SQLFetch: row: 1, column 2, bflsz: 22,  bflar: 6, (bfl: 20, mbl: 20)
    SQLFetch: row: 1, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 1, column 3, bflsz: 152,  bflar: 0, (bfl: 150, mbl: 150)
    SQLFetch: row: 2, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 2, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 2, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 2, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 2, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 2, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 3, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 3, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 3, column 2, bflsz: 22, bflar: 8
    SQLFetch: row: 3, column 2, bflsz: 22,  bflar: 8, (bfl: 0, mbl: 20)
    SQLFetch: row: 3, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 3, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 4, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 4, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 4, column 2, bflsz: 22, bflar: 6
    SQLFetch: row: 4, column 2, bflsz: 22,  bflar: 6, (bfl: 0, mbl: 20)
    SQLFetch: row: 4, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 4, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 5, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 5, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 5, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 5, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 5, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 5, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 6, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 6, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 6, column 2, bflsz: 22, bflar: 8
    SQLFetch: row: 6, column 2, bflsz: 22,  bflar: 8, (bfl: 0, mbl: 20)
    SQLFetch: row: 6, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 6, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 7, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 7, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 7, column 2, bflsz: 22, bflar: 6
    SQLFetch: row: 7, column 2, bflsz: 22,  bflar: 6, (bfl: 0, mbl: 20)
    SQLFetch: row: 7, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 7, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 8, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 8, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 8, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 8, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 8, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 8, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 9, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 9, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 9, column 2, bflsz: 22, bflar: 8
    SQLFetch: row: 9, column 2, bflsz: 22,  bflar: 8, (bfl: 0, mbl: 20)
    SQLFetch: row: 9, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 9, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 10, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 10, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 10, column 2, bflsz: 22, bflar: 6
    SQLFetch: row: 10, column 2, bflsz: 22,  bflar: 6, (bfl: 0, mbl: 20)
    SQLFetch: row: 10, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 10, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 11, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 11, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 11, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 11, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 11, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 11, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 12, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 12, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 12, column 2, bflsz: 22, bflar: 8
    SQLFetch: row: 12, column 2, bflsz: 22,  bflar: 8, (bfl: 0, mbl: 20)
    SQLFetch: row: 12, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 12, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 13, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 13, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 13, column 2, bflsz: 22, bflar: 6
    SQLFetch: row: 13, column 2, bflsz: 22,  bflar: 6, (bfl: 0, mbl: 20)
    SQLFetch: row: 13, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 13, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 14, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 14, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 14, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 14, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 14, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 14, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 15, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 15, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 15, column 2, bflsz: 22, bflar: 40
    SQLFetch: row: 15, column 2, bflsz: 22,  bflar: 40, (bfl: 0, mbl: 20)
    SQLFetch: row: 15, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 15, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 16, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 16, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 16, column 2, bflsz: 22, bflar: 8
    SQLFetch: row: 16, column 2, bflsz: 22,  bflar: 8, (bfl: 0, mbl: 20)
    SQLFetch: row: 16, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 16, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 17, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 17, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 17, column 2, bflsz: 22, bflar: 32
    SQLFetch: row: 17, column 2, bflsz: 22,  bflar: 32, (bfl: 0, mbl: 20)
    SQLFetch: row: 17, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 17, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 18, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 18, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 18, column 2, bflsz: 22, bflar: 40
    SQLFetch: row: 18, column 2, bflsz: 22,  bflar: 40, (bfl: 0, mbl: 20)
    SQLFetch: row: 18, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 18, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 19, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 19, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 19, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 19, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 19, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 19, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 20, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 20, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 20, column 2, bflsz: 22, bflar: 2
    SQLFetch: row: 20, column 2, bflsz: 22,  bflar: 2, (bfl: 0, mbl: 20)
    SQLFetch: row: 20, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 20, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 21, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 21, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 21, column 2, bflsz: 22, bflar: 2
    SQLFetch: row: 21, column 2, bflsz: 22,  bflar: 2, (bfl: 0, mbl: 20)
    SQLFetch: row: 21, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 21, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 22, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 22, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 22, column 2, bflsz: 22, bflar: 6
    SQLFetch: row: 22, column 2, bflsz: 22,  bflar: 6, (bfl: 0, mbl: 20)
    SQLFetch: row: 22, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 22, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 23, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 23, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 23, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 23, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 23, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 23, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 24, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 24, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 24, column 2, bflsz: 22, bflar: 40
    SQLFetch: row: 24, column 2, bflsz: 22,  bflar: 40, (bfl: 0, mbl: 20)
    SQLFetch: row: 24, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 24, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 25, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 25, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 25, column 2, bflsz: 22, bflar: 8
    SQLFetch: row: 25, column 2, bflsz: 22,  bflar: 8, (bfl: 0, mbl: 20)
    SQLFetch: row: 25, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 25, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 26, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 26, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 26, column 2, bflsz: 22, bflar: 32
    SQLFetch: row: 26, column 2, bflsz: 22,  bflar: 32, (bfl: 0, mbl: 20)
    SQLFetch: row: 26, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 26, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 27, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 27, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 27, column 2, bflsz: 22, bflar: 40
    SQLFetch: row: 27, column 2, bflsz: 22,  bflar: 40, (bfl: 0, mbl: 20)
    SQLFetch: row: 27, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 27, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 28, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 28, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 28, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 28, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 28, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 28, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 29, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 29, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 29, column 2, bflsz: 22, bflar: 2
    SQLFetch: row: 29, column 2, bflsz: 22,  bflar: 2, (bfl: 0, mbl: 20)
    SQLFetch: row: 29, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 29, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 30, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 30, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 30, column 2, bflsz: 22, bflar: 2
    SQLFetch: row: 30, column 2, bflsz: 22,  bflar: 2, (bfl: 0, mbl: 20)
    SQLFetch: row: 30, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 30, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 31, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 31, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 31, column 2, bflsz: 22, bflar: 6
    SQLFetch: row: 31, column 2, bflsz: 22,  bflar: 6, (bfl: 0, mbl: 20)
    SQLFetch: row: 31, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 31, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 32, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 32, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 32, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 32, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 32, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 32, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 33, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 33, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 33, column 2, bflsz: 22, bflar: 8
    SQLFetch: row: 33, column 2, bflsz: 22,  bflar: 8, (bfl: 0, mbl: 20)
    SQLFetch: row: 33, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 33, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 34, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 34, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 34, column 2, bflsz: 22, bflar: 6
    SQLFetch: row: 34, column 2, bflsz: 22,  bflar: 6, (bfl: 0, mbl: 20)
    SQLFetch: row: 34, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 34, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 35, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 35, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 35, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 35, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 35, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 35, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 36, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 36, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 36, column 2, bflsz: 22, bflar: 8
    SQLFetch: row: 36, column 2, bflsz: 22,  bflar: 8, (bfl: 0, mbl: 20)
    SQLFetch: row: 36, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 36, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 37, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 37, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 37, column 2, bflsz: 22, bflar: 6
    SQLFetch: row: 37, column 2, bflsz: 22,  bflar: 6, (bfl: 0, mbl: 20)
    SQLFetch: row: 37, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 37, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 38, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 38, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 38, column 2, bflsz: 22, bflar: 12
    SQLFetch: row: 38, column 2, bflsz: 22,  bflar: 12, (bfl: 0, mbl: 20)
    SQLFetch: row: 38, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 38, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 39, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 39, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 39, column 2, bflsz: 22, bflar: 8
    SQLFetch: row: 39, column 2, bflsz: 22,  bflar: 8, (bfl: 0, mbl: 20)
    SQLFetch: row: 39, column 3, bflsz: 152, bflar: 0
    SQLFetch: row: 39, column 3, bflsz: 152,  bflar: 0, (bfl: 0, mbl: 150)
    SQLFetch: row: 40, column 1, bflsz: 6, bflar: 6
    SQLFetch: row: 40, column 1, bflsz: 6,  bflar: 6, (bfl: 0, mbl: 3)
    SQLFetch: row: 40, column 2, bflsz: 22, bflar: 38
    SQLFetch: row: 40, column 2, bflsz: 22,  bflar: 38, (bfl: 0, mbl: 20)
    SQLFetch: row: 40, column 3, bflsz: 152, bflar: 298
    SQLFetch: row: 40, column 3, bflsz: 152,  bflar: 298, (bfl: 0, mbl: 150)
    40 rows fetched
    Exiting hgoftch, rc=0 at 2014/10/02-11:15:42 with error ptr FILE:hgoftch.c LINE:740 ID:Fetch resultset data
    Entered hgoftch, cursor id 1 at 2014/10/02-11:15:42
    hgoftch, line 130: Printing hoada @ 03705518
    MAX:3, ACTUAL:3, BRC:40, WHT=5 (SELECT_LIST)
    hoadaMOD bit-values found (0x40:TREAT_AS_NCHAR)
    DTY         NULL-OK  LEN  MAXBUFLEN   PR/SC  CST IND MOD NAME
    12 VARCHAR Y          4          3 128/  3 1000   0  40 MANDT
    12 VARCHAR Y          6         20 128/ 20 1000   0  40 NAME
    12 VARCHAR Y          0        150 128/150 1000   0  40 NAME_150
    0 rows fetched
    Exiting hgoftch, rc=1403 at 2014/10/02-11:15:42
    Entered hgoclse, cursor id 1 at 2014/10/02-11:15:46
    Exiting hgoclse, rc=0 at 2014/10/02-11:15:46
    Entered hgodafr, cursor id 1 at 2014/10/02-11:15:46
    Free hoada @ 03705518
    Exiting hgodafr, rc=0 at 2014/10/02-11:15:46
    Entered hgocomm at 2014/10/02-11:15:46
    keepinfo:0, tflag:1
       00: 4F52434C 55544638 2E376265 35343664  [ORCLUTF8.7be546d]
       10: 392E312E 32362E36 3630               [9.1.26.660]
                     tbid (len 23) is ...
       00: 4F52434C 55544638 5B312E32 362E3636  [ORCLUTF8[1.26.66]
       10: 305D5B31 2E345D                      [0][1.4]]
    cmt(0):
    Entered hgocpctx at 2014/10/02-11:15:46
    Exiting hgocpctx, rc=0 at 2014/10/02-11:15:46
    Exiting hgocomm, rc=0 at 2014/10/02-11:15:46
    Entered hgolgof at 2014/10/02-11:15:46
    tflag:1
    Exiting hgolgof, rc=0 at 2014/10/02-11:15:46
    Entered hgoexit at 2014/10/02-11:15:46
    Exiting hgoexit, rc=0

  • SAP HANA Input parameters in SAP BO Analysis

    Hi, dear experts.
    Please help me with a problem in SAP BO Analysis. In SAP HANA Studio I made a calculation view that has some input parameters (parameter type: Direct, semantic type: Date, data type: Date). Then I made a filter with these parameters in one of the projections of my calculation view. When I run data preview in HANA Studio, all input parameters are ready for input and they also have a value help dialog. All is great! The SAP HANA Studio version is 1.0.7000 (build id 386119); the HANA DB version is 1.00.70.00.386119.
    On the workstation I installed SAP BO Analysis, edition for Microsoft Office (32-bit) and the SAP HANA Client 70 (Win32) driver, and made an ODBC connection to the HANA server. From SAP BO Analysis I found my view and started it. I can see my input parameters and the value help dialog, but I can't choose a value: it is not available for input. Here is the problem: what do I have to do to enter the parameters?
    I also tried to use a variable, but its value help dialog is empty. Please help me with this issue.

    Hello Andrei,
    Tested on:
    HANA Rev 70
    I have tested with variables and the value help comes up.
    Regards,
    Krishna Tangudu
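    One way to check, outside the Office client, whether the input parameter itself behaves as expected is to query the calculation view directly with the PLACEHOLDER syntax. A minimal sketch with hypothetical package, view and parameter names:

        SELECT *
          FROM "_SYS_BIC"."mypackage/CV_SALES"
               ('PLACEHOLDER' = ('$$P_DATE$$', '2014-01-01'))
         LIMIT 10;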

  • Which programming language is used for SAP HANA? Confused between ABAP and Java

    Can anyone tell me, as a programmer, in which language I have to work for SAP HANA?

    Hi Jagaa,
    that really depends on what you'd like to do. But a brief answer:
    a) If you develop ABAP applications - use ABAP (as for other databases) and Open SQL / CDS, and if additional HANA functionality is needed maybe a bit of native SQL (ADBC or ABAP Managed DB Procedures).
    b) If you develop natively on the database, please check with the experts in the SAP HANA Developer Center (a small SQLScript sketch follows this post).
    Cheers,
    Jasmin
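    For case b), native development on the database side is typically done in SQLScript (plus plain SQL and, on XS, server-side JavaScript). A minimal SQLScript sketch with hypothetical schema, table and column names:

        -- A simple read-only SQLScript procedure with a tabular output parameter (sketch)
        CREATE PROCEDURE get_customer_totals (
            OUT result TABLE (CUSTOMER_ID NVARCHAR(10), TOTAL DECIMAL(15,2))
        )
        LANGUAGE SQLSCRIPT READS SQL DATA AS
        BEGIN
            result = SELECT CUSTOMER_ID, CAST(SUM(AMOUNT) AS DECIMAL(15,2)) AS TOTAL
                       FROM "SALES"."ORDERS"
                      GROUP BY CUSTOMER_ID;
        END;

        -- Call it from any SQL console:
        CALL get_customer_totals (?);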
