PoC in HANA

Hi, I want to create a PoC in HANA. Are there any videos/links that explain how I should go about doing this?

Tulsi,
No one will teach you how to deploy HANA for free; your best option is to go through the HANA tutorials (see the link below).
Official Product Tutorials – BI for SAP HANA
Regards,
Ajay

Similar Messages

  • Planning to start HANA POC on BW

    Hi All,
We are planning to do a HANA POC with 'SAP NetWeaver 7.3 BW powered by SAP HANA'.
Kindly suggest the basic steps and prerequisites to be considered.

Sunil,
Find out the use case first. If you have identified the use case, 50% of the job is done.
See the blog below; hope it helps.
    Estimation for BW on HANA Migration ( Landscape Migration)

Trying to go for a POC (BW on HANA): is such a POC possible?

Hello friends,
I am extremely new to this community, so I would like some guidance from experienced people like you on building a POC (BW on HANA).
My queries are:
1) What factors need to be proved while making a POC?
2) Where shall I get such huge data from, and where shall I store it?
3) Should the POC be about saving replication time or about compressing data?
4) Should the data be replicated once, or does it need to be done periodically?
I would also appreciate it if anybody could share the steps for making a POC (BW on HANA); I am not asking for any confidential data, just the basic steps to go about the POC.
Kindly help me with this.
Thanks
Martin Mathew

Hi Martin Mathew,
Just to understand a bit more:
Are you trying to do a POC for yourself? If so, I would say it is probably not possible.
There are a couple of ways of doing a POC.
1) Normally SAP or its partners do a POC for the client, wherein they consume the client's quality or production data to showcase some of the advantages of the product to the customer.
Example: through a POC, a vendor can show the customer that, using HANA, the query execution time of a complex report can be reduced from 1 hour to 2 minutes.
I would say a POC has two advantages:
a) It can win the confidence of the customer.
b) It can showcase the power of the product.
1) What factors need to be proved while making a POC?
--> Purely depends on the environment, landscape, and customer requirements.
2) Where shall I get such huge data from, and where shall I store it?
--> In most cases, customer quality or production data is consumed.
3) Should the POC be about saving replication time or about compressing data?
--> Purely depends on the environment, landscape, and customer requirements.
4) Should the data be replicated once, or does it need to be done periodically?
--> Purely depends on the environment, landscape, and customer requirements.
These are my viewpoints.
Let's see if some of our colleagues here want to explain more on these topics...
    BR
    Prabhith

Does SAP provide a fully loaded HANA sandbox for POCs?

    Hi,
One of our potential customers is looking at a POC, but they wonder whether SAP provides a fully loaded SAP HANA sandbox where they could carry out the PoC.
    Any input will help.
    thanks,
    Tilak

    Tilak:
    What is the objective of your POC?
If you are trying to stress test HANA with a high volume of data, then you need to speak to your SAP rep to get a performance-tuned sandbox. However, if you are not too concerned about stress testing but would like to showcase the HANA solution to your IT department, including the steps involved in modelling a HANA solution, then you can use the Amazon AWS environment. I have done a few POCs in my own AWS environment with the latest and greatest backend and front-end patches.
    Hope this helps. Do let me know if you have any specific questions.
    Regards,
    Rama

How do you get experts with a negative stance toward new things / innovations on board with HANA?

We have over 50 ABAP developers (senior experts). Primarily we develop in the old core modules (SD, MM, FI, CO, HR, PP, CS, IH, PS) on ERP systems / Business Suite.
We have three groups of developers:
Group 1: They can't wait to work on new architectures. They are open to everything, enjoy working as pioneers, and like to dig deep into the system.
Group 2: For these developers it's all the same; moving to other architectures is not a problem for them.
Group 3: They have no interest in working on new architectures or in spending time learning new things; they are very closed to new ideas and have something negative to say on every topic.
I am part of group 1. In my opinion, in IT it's normal to spend a lot of free time on new topics to keep up to date. New topics and innovative things make the developer job very exciting. For me it's a regular process, and it's my own passion.
For two months we have had our own HANA system in our data centre as a playground :-) (Business Suite on SAP HANA). Some colleagues have completed the HANA certification, and we have taken the first steps in our system. For groups 1 and 2 everything is okay and they are having fun.
We have problems with group 3. They find fault with everything and spend a great deal of time searching for arguments against HANA. That's our "negative group". We copied our SAP system to a new system and did a technical migration. Now they compare the SAP system based on an Oracle database with the new SAP system based on HANA. They go through the standard ERP processes (offer / order / purchase order / goods movements / delivery / MM invoice / SD invoice / material master data / customer master data / vendor master data / conditions / financial postings / etc.). Their main argument is that they cannot see any performance improvement or added value from the investment. Our other problem is that these people have over 20 years of experience in ABAP development, so their opinion carries a lot of weight. Their other arguments: IBM and Oracle are working on similar architectures, and with those we can keep the open SQL syntax and the existing code.
Do you have similar problems getting the acceptance of group 3?
Do you have tips / tricks for us?
Do you have ideas for winning over group 3?
Which standard components are really optimized for HANA?
In which standard components can we see a real performance improvement?
Are there standard use cases that show the differences?
Which data volume do we need in the data model to see the differences?
What can we do to bring group 3 along with us?
How can we open group 3 up to innovations?

    Hi,
    Please find my reply below.
1. Do you have similar problems getting the acceptance of group 3?
In the IT world we have similar groups. Only results will help with group 3. I would suggest showcasing results.
2. Do you have tips / tricks for us?
As you mentioned, you already have a HANA system to play with. So I would suggest looking at high-performance transactions like the MRP run and the FICO month-end close.
3. Do you have ideas for winning over group 3?
SAP has a few use cases. Take up these use cases, build data models, and use them in ERP.
4. Which standard components are really optimized for HANA?
Recently in our organization we replaced our DB2 database with HANA, so our SAP ECC is now running on HANA. Straight away we saw a 30% performance improvement across all transactions. SAP provides optimized transactions with significant performance improvements, and the SAP road map clearly talks about delivering more HANA-optimized transactions to its customers.
We tried to push some of our performance-critical code down to the database layer by creating data models and using those views in SAP ABAP programs and transactions. This gave us a significant performance improvement (see the SQL sketch after this list).
Please refer to the document below; it has a list of the standard optimized transactions.
Link
5. In which standard components can we see a real performance improvement?
This blog throws light on the following high-performance transactions.
Link
6. Are there standard use cases that show the differences?
Use Case
7. Which data volume do we need in the data model to see the differences?
The MRP run transaction has huge performance issues; you can work on the MRP run data model.
8. What can we do to bring group 3 along with us?
Only proof-of-concept (POC) results will help to bring this group along.
9. How can we open group 3 up to innovations?
From your email, I feel group 3 is the most demotivated group. I would suggest talking to each individual, understanding their areas of interest, and trying to assign them work in those areas. In my experience this gives good results.
    -VJ.
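To illustrate the push-down idea from point 4, here is a minimal sketch (schema, table, and column names are invented for the example): instead of fetching rows into the application server and summing them in a loop, the aggregation is expressed as a view and executed inside HANA.

-- Hypothetical example: aggregate open order values in the database layer.
CREATE VIEW "MYSCHEMA"."V_OPEN_ORDER_VALUE" AS
  SELECT "CUSTOMER_ID",
         SUM("ORDER_VALUE") AS "TOTAL_OPEN_VALUE"
  FROM "MYSCHEMA"."ORDERS"
  WHERE "STATUS" = 'OPEN'
  GROUP BY "CUSTOMER_ID";
-- An ABAP program or transaction then simply selects from this view.

This is only a sketch of the pattern; the real gains come from moving the actual transaction logic into such views.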

  • SAP HANA modelling Standalone

    Hello Experts,
We are in the process of a standalone HANA implementation with Design Studio as the reporting tool. While modeling, I could not figure out the answers to some of the questions below. Experts, please help.
Best way of modeling: SAP HANA Live is built entirely on calculation views; there are no attribute or analytic views. I have received different answers as to why there are only calculation views and no analytic or attribute views. We are on SP7, the latest version. This is a brand new HANA on top of a non-SAP (DB2) source. What is the best way to model this scenario? Can we model everything in calculation views like SAP HANA Live, or do you suggest using the standard attribute, analytic, and calculation views for the data model? Is SAP moving away from AV & AT toward calculation views only, to simplify the modeling approach?
Reporting: We are using Design Studio as the front-end tool. For example, if we were using BW, we would bring all the data into BW from different sources, build the cubes, and use BEx queries. In the BEx query we would use restricted key figures, calculated key figures, and so on. We have the same reporting requirements: calculations, RKFs, CKFs, sums, averages, etc. If we are using Design Studio on top of standalone HANA, where do I need to implement all these calculations? Is it in different views? (From a reporting perspective, on a BW system I would have done all the calculations in BEx.)
Universe: If we are doing all the calculations, like RKFs and CKFs, in SAP HANA, what is the point of having the additional universe layer, given that the reporting components can access the views directly? In one of our POCs, we found that using a universe affected performance.
Real-time reporting: Our overall objective is to meet real-time or near-real-time reporting requirements. How can Data Services help? I can schedule the data loads every 3 or 5 minutes to pull data from the source. If I am using Data Services, how soon can I get the data into HANA? I know it depends on the number of records, the transformations between the systems, and network speed. Assuming I schedule the job every 2 minutes and it takes another 5 minutes to process the Data Services job, is it fair to say that my information will be available in the BOBJ tools within 10 minutes of the records being created?
Are there any new ETL capabilities included in SP7? I see some additional features in SP7. Are the concepts discussed here still valid, given that SP7 introduces the star join concept?
    Thanks
    Magge

    magge kris wrote:
    Hello Experts,
We are in the process of a standalone HANA implementation with Design Studio as the reporting tool. While modeling, I could not figure out the answers to some of the questions below. Experts, please help.
Best way of modeling: SAP HANA Live is built entirely on calculation views; there are no attribute or analytic views. I have received different answers as to why there are only calculation views and no analytic or attribute views. We are on SP7, the latest version. This is a brand new HANA on top of a non-SAP (DB2) source. What is the best way to model this scenario? Can we model everything in calculation views like SAP HANA Live, or do you suggest using the standard attribute, analytic, and calculation views for the data model? Is SAP moving away from AV & AT toward calculation views only, to simplify the modeling approach?
>> I haven't read any "official" guidance to move away from the typical modeling approach, so I'd say stick with the usual approach: attribute views, then analytic views, then calculation views. I was told that the reason for the different approach with HANA Live was to simplify development for mass production of solutions.
Reporting: We are using Design Studio as the front-end tool. For example, if we were using BW, we would bring all the data into BW from different sources, build the cubes, and use BEx queries. In the BEx query we would use restricted key figures, calculated key figures, and so on. We have the same reporting requirements: calculations, RKFs, CKFs, sums, averages, etc. If we are using Design Studio on top of standalone HANA, where do I need to implement all these calculations? Is it in different views? (From a reporting perspective, on a BW system I would have done all the calculations in BEx.)
>> I'm not a BW guy, but from a HANA perspective: implement them where they make the most sense. In some cases this is obvious; restricted columns, for instance, are only available in analytic views. It's hard to provide more detailed advice here; it depends on your scenario(s). Review your training materials and SCN posts and you should start to develop a better idea of where to model particular requirements. Most of the time in typical BI scenarios, requirements map nicely to straightforward modeling approaches such as attribute/analytic/calculation views. However, some situations, such as slowly changing dimensions or certain kinds of calculations (e.g. calculation before aggregation with BODS as the source, where the calculation should be done in the ETL logic), can be more complex. If you have specific scenarios that you're unsure about, post them here on SCN.
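As a hedged illustration (the view and column names are invented): restricted and calculated measures can also be expressed directly in SQL against a HANA view, which is roughly what the graphical RKF/CKF definitions boil down to.

-- Hypothetical example: RKF/CKF-style measures in plain SQL on a HANA view.
SELECT "REGION",
       SUM("AMOUNT") AS "TOTAL_AMOUNT",
       SUM(CASE WHEN "YEAR" = 2014 THEN "AMOUNT" ELSE 0 END) AS "AMOUNT_2014", -- restricted measure
       SUM("AMOUNT") - SUM("COST") AS "MARGIN" -- calculated measure
FROM "_SYS_BIC"."mypackage/AV_SALES"
GROUP BY "REGION";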
Universe: If we are doing all the calculations, like RKFs and CKFs, in SAP HANA, what is the point of having the additional universe layer, given that the reporting components can access the views directly? In one of our POCs, we found that using a universe affected performance.
>>> Depends on what you're doing. A universe generates SQL just like the front-end tools do, so bad performance implies bad modeling. Generally speaking, universes *can* create a more autonomous reporting architecture. But if your scenario doesn't require that, then by all means avoid the additional layer if there's no added value.
Real-time reporting: Our overall objective is to meet real-time or near-real-time reporting requirements. How can Data Services help? I can schedule the data loads every 3 or 5 minutes to pull data from the source. If I am using Data Services, how soon can I get the data into HANA? I know it depends on the number of records, the transformations between the systems, and network speed. Assuming I schedule the job every 2 minutes and it takes another 5 minutes to process the Data Services job, is it fair to say that my information will be available in the BOBJ tools within 10 minutes of the records being created?
Are there any new ETL capabilities included in SP7? I see some additional features in SP7. Are the concepts discussed here still valid, given that SP7 introduces the star join concept?
>>> Not exactly sure what your question here is. Your limits with BODS are the same as with any other target system; they don't depend on HANA. The second the records are committed to HANA, they are available. They may be in delta storage, but they're available. You just need to work out how often to schedule BODS, and if your jobs take 5 minutes to run but you're scheduling executions every 2 minutes, you're going to run into problems...
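As a small hedged aside on delta storage (the table name is invented): newly committed rows sit in the delta store until the automatic delta merge runs; if you ever need to force it, for instance before a benchmark, it can be triggered manually.

-- Force a delta merge so new rows move to main storage (normally automatic).
MERGE DELTA OF "MYSCHEMA"."SALES_FACT";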
    Thanks
    Magge

Analysis for Office 1.4.7: can't do calculations based on HANA data

    Hi,
We are currently using Analysis for Office 1.4.7 on HANA data. When we try to use a calculation, we get the following error:
    Nested exception. See inner exception below for more details:
    Unable to execute SQL statement (CREATE_COLUMN_VIEW) \: CREATE OLAP SCENARIO '<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <cubeSchema operation="createCalculationScenario" version="3">
        <calculationScenario name="POC-zmlf-fico-histcopatma-pkg/CALV_SHCOPATMA_MUL_1408373161431" schema="_SYS_BIC">
            <scenarioHints createInMemoryOnly="false"/>
            <dataSources>
                <analyticDataSource view="POC-zmlf-fico-histcopatma-pkg/CALV_SHCOPATMA_MUL" schema="_SYS_BIC" name="columnViewSource">
                    <attributes>
                        <allAttribute/>
                    </attributes>
                </analyticDataSource>
            </dataSources>
            <calculationViews>
                <aggregation defaultViewFlag="true" name="defaultAggregation">
                    <inputs>
                        <input name="columnViewSource"/>
                    </inputs>
                    <viewAttributes>
                        <allViewAttribute/>
                    </viewAttributes>
                    <keyfigures>
                        <allKeyfigure/>
                         <calculatedKeyfigure datatype="double" aggregationType="sum" name="[Measures]_Formula1">
        <formula>("G_QVVNTW" - "ZKGPSTD")</formula>
    </calculatedKeyfigure>
                    </keyfigures>
                </aggregation>
            </calculationViews>
        </calculationScenario>
    </cubeSchema>' (ERROR [S1000] [SAP AG][LIBODBCHDB32 DLL][HDBODBC32] General error;258 insufficient privilege: Not authorized)
Currently the account we use has SELECT access to "_SYS_BIC". What else do we need in terms of privileges?
    Thanks.
    Terry

Hi Tammy,
We can do calculations now with the "CREATE SCENARIO" system privilege.
The issue I am facing now is which objects we can do calculations between, and which we can't or should not.
Example:
I can do a calculation between Kilos and Net Weight (Kilos) since both are quantities. But once I bring Gross Profit at Standard, which is a dollar value, into my calculation, I get "No applicable data found".
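For reference, a minimal sketch of the grants that address the 258 error above, assuming a user named REPORT_USER (the user name is invented; your security policy may call for roles rather than direct grants):

-- The CREATE_COLUMN_VIEW call in the error needs the CREATE SCENARIO system privilege;
-- reading the views in _SYS_BIC still needs SELECT on that schema.
GRANT CREATE SCENARIO TO REPORT_USER;
GRANT SELECT ON SCHEMA "_SYS_BIC" TO REPORT_USER;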

Accessing a HANA database through a native iPad application

    Hi Guys,
I was wondering if there is a way to establish a connection and retrieve data from a HANA database in an iPad application.
Our team has to build a POC on the iPad (the timeline is very tight: about 1 week to build it and get it up and running).
    Please help.
    ~
    Shrayas
    I065680.

Actually, we at Approyo have an iPad application through which we run SAP and all the solutions in a native format.
http://www.approyo.com/#!projects/c1vw1
See the photos attached and the web link... Hope they help.

How beneficial is it to have the EP system database on HANA?

    Dear HANA Experts,
    I would like to know the benefits of implementing EP 7.4 on HANA.
As far as I know, there will be fewer database accesses during Java Enterprise Portal system operations.
How beneficial is it to have the EP 7.4 database on HANA?
Please provide pointers to more information on this topic.
During the POC, one benefit we observed is that the portal system came up in less than 2 minutes on the HANA DB, whereas it takes around 7 minutes on the MSSQL DB!
    Thanks,
    John Prabhakar


  • What if HANA was used to process Large Hadron Collider (LHC) Data?

What if you wanted to record and analyse data about particles accelerating at 99.99% of the speed of light?
Watch this video on how data is collected and processed at the LHC.
Please download the following presentations, which provide more insight into the information systems used at the LHC:
https://cms-docdb.cern.ch/cgi-bin/PublicDocDB/RetrieveFile?docid=6057&version=4&filename=CHEP-2012-Lucas-Taylor-CMS-Information-Systems-final.pdf
https://csc.web.cern.ch/CSC/2013/iCSC2013/Right_menu_items/Handouts/Per_lecture_pdf_files/07_Lecture_07-Grid-Interpretation-1_slide_per_page.pdf
To know more about the LHC, visit the following links:
    https://www.youtube.com/watch?v=Nqp43FPx414
    http://home.web.cern.ch/about/computing/processing-what-record
    http://en.wikipedia.org/wiki/Large_Hadron_Collider
    https://csc.web.cern.ch/CSC/2013/iCSC2013/Right_menu_items/Handouts/Handouts.htm
    https://cms-docdb.cern.ch/cgi-bin/PublicDocDB/ListBy?topicid=187
    http://cds.cern.ch/record/922757/files/lhcc-2006-001.pdf
    About Grid Computing
    http://en.wikipedia.org/wiki/Grid_computing

    Thanks Thomas,
In fact, SAP is working with Helix Nebula, which is researching the use of cloud technologies for LHC data processing. Using HANA to process data generated by the LHC could help scientists make faster decisions, which could perhaps save infrastructure costs.
    CERN-LHC Use Case | Helix Nebula
    http://www.helix-nebula.eu/the-partnership
I found a PoC done by Michelle Gary which I think may be helpful:
    Physics Analysis in SAP HANA -- Simple PoC
    Regards,
    Pratik

Internal communication error when executing a procedure on a HANA MPP cluster

    Hi All,
I'm executing a HANA POC in a customer environment; it's a 6-node HANA cluster (one master and 5 worker nodes).
When I create a table replicated across all nodes and then execute a procedure against that table, it fails with an internal error.
create column table CC.AA (NEWDATE DATE primary key) replica at all locations;
create procedure CC.P_INS_ADW_DIM_DATE( )
LANGUAGE SQLSCRIPT
SQL SECURITY INVOKER
AS
BEGIN
       -- local variables
       DECLARE FULLDATE DATE;
       DECLARE MONTHNUMBER INTEGER;
       DECLARE YEARNUMBER INTEGER;
       DECLARE DAYNUMBER INTEGER;
       FULLDATE := TO_DATE('19800101','YYYYMMDD');
       INSERT INTO "CC"."AA" VALUES (:FULLDATE);
END;
CALL CC.P_INS_ADW_DIM_DATE( );
The error message is like the one below:
    SAP DBTech JDBC: Cannot connect to VolumeID=7 [Cannot connect to host 172.21.36.58:34215 [Connection timed out]]
172.21.36.58 is the internal communication IP address of the cluster node, while the public IP should be 192.168.1.123.
I wonder: if HANA transfers data through the internal channel, shouldn't the port be 3××003?
Also, I saw in /etc/hosts that the internal IP, not the public IP, is bound to the host name:
    hana003  172.21.36.58
    Can anyone help?
    thanks!
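A quick way to check which host names and ports each service actually advertises to clients is the monitoring view M_SERVICES (a diagnostic sketch; run it from any SQL console on the system):

-- Shows, per node, the host and the SQL port clients are expected to reach.
SELECT HOST, SERVICE_NAME, PORT, SQL_PORT FROM SYS.M_SERVICES;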

    Hi Experts,
Did anybody face the same problem mentioned above? How can it be fixed?
    Thanks,
    Umashankar

Integration of Infor/Lawson M3 and SAP HANA

    Hi Experts,
    We are planning to develop a POC: Integration of Infor/ Lawson M3 with SAP HANA.
What we want to achieve is to migrate the Infor/Lawson M3 database to SAP HANA and then connect this HANA database (which now holds the migrated Infor/Lawson M3 data) to Infor/Lawson M3, treating Infor/Lawson M3 as a front end.
The whole idea is to use SAP HANA as the database and Infor/Lawson M3 as the front end.
We are at a very early stage and are assessing the feasibility of this idea.
Any suggestions regarding feasibility or non-feasibility would be of great help.
    Reply Awaited!
    Best Regards,
    Niyati

    Hi Experts,
Any help on the above idea?
    Regards
    Niyati

  • SAPUI5 on HANA applications in Portal?

    Hello All,
I would like to integrate SAPUI5 applications that consume data from a HANA database into SAP Portal. I asked my friend "Google" about the possibilities and found that one possible way would be like this.
    Note that this iView template is mainly intended for ABAP-based scenarios.
What does the above statement actually mean? Does it mean this is possible only for applications consuming data from ERP?
If so, can anyone suggest resources or ideas for how this could be achieved with SAPUI5 applications on HANA?
    Any suggestions would be highly appreciated.
    Thanks and Regards
    Sangamesh

    Sangamesh Hanumantha Sangamad wrote:
    Note that this iView template is mainly intended for ABAP-based scenarios.
What does the above statement actually mean? Does it mean this is possible only for applications consuming data from ERP?
If so, can anyone suggest resources or ideas for how this could be achieved with SAPUI5 applications on HANA?
    Any suggestions would be highly appreciated.
I had similar questions some time ago and have not received any really usable answers to this day: Integrating UI5 applications in EP with local system object
In fact the practical idea is simple: take a UI5-plugin-enriched Kepler (or Luna?), write a native UI5 application, and deploy it on your portal. Integration can be done using the available templates (you will need a backend loop; I have no idea how to justify that effort) or a URL iView. The context of data consumption within your application is entirely independent of this process and is up to you; in your case it's HANA consumption, as you mentioned. Of course you will need know-how on it, but it's in fact not a portal question; take a look at the available how-tos.
I tried a similar POC scenario for the Fiori launchpad on the portal, and it worked more or less well for me. For myself it was something to play around with. IMHO, in the currently delivered quality (7.4 SP9 with the latest patches) it's nothing I would suggest for a customer of mine. There are serious bugs and restrictions on AS Java regarding this topic (e.g. the new theme designer (note 1890375), integration questions, and so on). So if you are thinking about selling this to somebody, think twice, but this is my very subjective opinion.
    cheers

SAP HANA Modeling: sample end-to-end project

    Dear All,
I was trained on SAP HANA modeling and BODS recently, and I am in the process of learning SAP BO. My core area is SAP EP, and I don't have in-depth knowledge of SAP BI or of the functional aspects.
I am working on a few sample applications, but I would like to do a sample end-to-end project on real-time scenarios/POCs.
Do you have any sample requirement documents that would help with doing an end-to-end project (real-time scenarios)? Does SAP store any pilot project requirement specs for public use that would help to gain real-world experience?
    Kindly help me on this.
    Thanks & Regards,
    Rajeev Bikkani

Hi, look into this:
    Role of a Functional Consultant in an End To End Implementation
1. The functional consultant is expected to build up knowledge of the current business processes, document the current business flows, and study the current business processes and their complications; in short, to get familiar with the current business setup. Flow diagrams and DFDs are prepared, most of the time in Visio format; all of this forms part of the AS-IS document.
2. Everything configured has to be documented by category in the form of predefined templates; these then have to be approved by the team leads or whoever the consultant reports to.
3. Mapping and gap analysis are done for each module. I have seen people defining integration after mapping, gap analysis, and configuration are done, but in my implementation experience it is a simultaneous process.
4. Before starting to configure the future business processes in SAP, the DFDs/ERDs are prepared. This documentation is called TO-BE, and it can also be seen as the result of the mapping and gap analysis.
5. Sometimes functional consultants are also expected to prepare test scripts for testing the configured scenarios.
6. End-user manuals and user training are also expected from functional consultants.
The project normally starts off with a kick-off meeting in which the team size, team members, reporting structure, responsibilities, duties, methodology, dates and schedules, and working hours, which have been decided in advance, are formally defined.

  • BW on HANA, Archive data to Hadoop

    Dear All,
We are planning to start a PoC for one of our clients. Below is the scenario:
Use BW on HANA for real-time analytics and Hadoop as cold storage.
Archive historical data to Hadoop.
Report on both HANA and Hadoop.
Access Hadoop data using SDA.
I request you to provide implementation steps if somebody has worked on a similar scenario.
    Thanks & Regards,
    Rajeev Bikkani

    Hi Rajeev Bikkani,
Currently NLS using Hadoop is not available by default, and SAP highly recommends IQ for NLS. If you opt for IQ, in the longer run it will be easier to maintain and scale up, and it will also give better query performance; in the longer run it will yield a better ROI. Initially Hadoop will be cost-effective, but the amount of time spent getting the solution running will be challenging, and later on maintaining and scaling it will also be very challenging. So SAP highly recommends IQ for NLS. SAP positions Hadoop, alongside HANA, to handle big data, mainly unstructured data, and not for NLS. So please reconsider your option.
I went through the link, and I don't agree with the point "Archiving was not an option, as they needed to report on all data for trending analysis. NLS required updating the info providers and bex queries to access NLS data. SAP offers Sybase IQ as a NLS option but it doesn't come cheap with a typical cost well over a quarter million dollars."
That is because you can query the archived data, and it does not need to be written back to the providers; at runtime the data is read from NLS and displayed.
If you still want to use Hadoop as NLS, then I can suggest a process, but I have not tried it personally:
1) Extract data selectively from your InfoProvider via an open hub destination (OHD) and keep it in the OHD table.
2) Write the data from the OHD table to an HBase table (check the link below for how to do it).
3) Delete the data from the OHD table.
4) Whatever data was moved to the OHD table should be deleted from the InfoProvider by selective deletion.
5) Now connect HANA to Hadoop via SDA and virtualize the table to which we have written the data (see the SQL sketch at the end of this reply).
6) Then build a view on top of this table and query it.
7) The HANA view's historical data can be combined with BW provider data via an Open ODS view, a CompositeProvider, or a TransientProvider.
    Reading and Writing to HADOOP HBASE with HANA XS
    http://scn.sap.com/community/developer-center/hana/blog/2014/06/03/reading-and-writing-to-hadoop-hbase-with-hana-xs
Hope this helps.
    Thanks & Regards
    A.Dinesh
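As a rough illustration of step 5 above, here is a minimal SQL sketch, assuming a Hive ODBC DSN named HIVE_DSN and an archive table archive_sales in Hive's default schema (all names are invented, and the adapter name and the four-part remote path depend on your landscape and SDA adapter):

-- Register the Hadoop/Hive system as an SDA remote source.
CREATE REMOTE SOURCE "HADOOP_SRC" ADAPTER "hiveodbc"
  CONFIGURATION 'DSN=HIVE_DSN'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hive;password=*****';
-- Expose the Hive table in HANA as a virtual table.
CREATE VIRTUAL TABLE "MYSCHEMA"."VT_ARCHIVE_SALES"
  AT "HADOOP_SRC"."HIVE"."default"."archive_sales";
-- A view on top of the virtual table can then be combined with BW provider data.
CREATE VIEW "MYSCHEMA"."V_ARCHIVE_SALES" AS
  SELECT * FROM "MYSCHEMA"."VT_ARCHIVE_SALES";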
