SAP HANA runs based on Real time processing or Batch Processing

i) Does SAP HANA run based on real-time processing, batch processing, or some other processing model?
Kindly provide the answer ____________________
ii) Which two new development technologies led to the growth of SAP HANA?
a.) Columnar structure
b.) Multicore processing
c.) RDBMS
Kindly provide the answer _____________
Thanks,
Vigram.

Hi Vigram,
SAP HANA supports both real-time and batch processing, among other processing models.
SAP HANA also gets the benefit of both columnar storage and multicore processing.
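A toy sketch of why a columnar structure speeds up analytic scans (illustrative Python only, not HANA internals): aggregating one attribute touches a single contiguous array instead of every full row.

```python
# Row store: each record holds all attributes together.
rows = [
    {"id": 1, "region": "EMEA", "amount": 100},
    {"id": 2, "region": "APJ",  "amount": 250},
    {"id": 3, "region": "EMEA", "amount": 175},
]

# Column store: one array per attribute.
columns = {
    "id":     [1, 2, 3],
    "region": ["EMEA", "APJ", "EMEA"],
    "amount": [100, 250, 175],
}

# An aggregate over one attribute only needs to read that column;
# the row store must walk every record.
total_row_store = sum(r["amount"] for r in rows)
total_col_store = sum(columns["amount"])
assert total_row_store == total_col_store == 525
```

Multicore processing then parallelizes such column scans across CPU cores, which is the second technology mentioned above.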

Similar Messages

  • SAP HANA Web-based Development Workbench Privileges

    I can get into and use the SAP HANA web-based development workbench as the SYSTEM user, but I cannot as other users.
    http://localhost:8005/sap/hana/xs/ide/editor/
    Currently as any other user, I get the following error: "Request execution failed due to missing privileges"
    What are the appropriate privileges that need to be set in SAP HANA Studio to access the web-based IDE?
    Currently, just messing around with the privileges in SAP HANA Studio, if I set the following application privileges, I can at least log in as another user besides SYSTEM.
    However, I cannot get any of the files to display in the editor portion of the SAP HANA web-based development workbench. A possibly related error in the log console: "13:10:12 >> Error in WebSocket Connection to backend."
    I found this thread discussing the same issue I believe, but the solution I would imagine is not correct: HANA XS Administration | SAP HANA
    Thanks,
    Kevin

    Hi Fernando,
    Thanks for the suggestion. That seemed to work, though it is strange that it is not one of the *.xs.* application privileges. So, just to be clear, here are the application privileges I have now in order to use the web based IDE:
    Thanks,
    Kevin
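    For reference, access to the Web-based Development Workbench is typically granted by assigning the delivered developer role via SQL (a sketch; 'MYUSER' is a placeholder user, and the exact role name can vary by HANA revision):

```sql
-- Assign the delivered Web-based Development Workbench developer role
-- to a user ('MYUSER' is a placeholder).
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('sap.hana.ide.roles::Developer', 'MYUSER');
```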

  • SAP HANA web based studio?

    Hello,
    Is there a web-based tool for administering SAP HANA, like the SAP HANA Studio?
    For development I found lots of threads about the web-based development workbench, but nothing about a web-based administration tool.
    Thanks for any help.
    Best regards,
    Hassan khanafer

    I have the impression, though it has not been confirmed by anyone, that with its cloud offering SAP is moving more and more DBA administration functionality to the web and away from Eclipse, but the ultimate deliverable will be defined by customer demand, and that may be many releases or years in the future. I think Amazon is the current leader in that space, but others are quickly catching up.

  • Idoc processing in batch process

    How do I process an inbound IDoc in a batch program?
    I am retrieving the IDoc numbers that need to be processed; how should I go about processing them?
    Should I call the standard function module mapped to the IDoc for processing it?
    Please suggest.

    Hi,
    You can schedule a background job with the IDoc numbers for the program RBDAPP01 (inbound IDoc processing) with all relevant parameters.
    Hope this helps.
    Regards, Murugesh

  • Report as batch process?

    Can we run a Z program or report as a background or batch process? If yes, how?

    Hi,
    1. Go to SE38, enter the program name, then choose the menu path Program → Execute → Background. You need to create a variant for the selection screen.
    2. Alternatively, go to SE38, enter the program name, execute it, and on the selection screen choose the menu path Program → Execute → Background.

  • Getting an error after executing batch process LTRPRT

    Hi, we are testing how the flat files are created for different customer contacts. For that we ran the batch process LTRPRT.
    It executed and ended abnormally with an error.
    These are the parameters that were submitted during batch submission:
    FILE-PATH=d:\spl
    FIELD-DELIM-SW=Y
    CNTL-REC-SW=N
    This is the error:
    ERROR (com.splwg.base.support.batch.GenericCobolBatchProgram) A non-zero code was returned from call to COBOL batch program CIPCLTPB: 2
    com.splwg.shared.common.LoggedException: A non-zero code was returned from call to COBOL batch program CIPCLTPB: 2
         at com.splwg.shared.common.LoggedException.raised(LoggedException.java:65)
         at com.splwg.base.support.batch.GenericCobolBatchProgram.callCobolInCobolThread(GenericCobolBatchProgram.java:78)
         at com.splwg.base.support.batch.GenericCobolBatchProgram.execute(GenericCobolBatchProgram.java:38)
         at com.splwg.base.support.batch.CobolBatchWork$DoExecuteWorkInSession.doBatchWorkInSession(CobolBatchWork.java:81)
         at com.splwg.base.support.batch.BatchWorkInSessionExecutable.run(BatchWorkInSessionExecutable.java:56)
         at com.splwg.base.support.batch.CobolBatchWork.doExecuteWork(CobolBatchWork.java:54)
         at com.splwg.base.support.grid.AbstractGridWork.executeWork(AbstractGridWork.java:69)
         at com.splwg.base.support.grid.node.SingleThreadedGrid.addToWorkables(SingleThreadedGrid.java:49)
         at com.splwg.base.support.grid.node.AbstractSingleThreadedGrid.processNewWork(AbstractSingleThreadedGrid.java:49)
         at com.splwg.base.api.batch.StandaloneExecuter$ProcessNewWorkExecutable.execute(StandaloneExecuter.java:590)
         at com.splwg.base.support.context.SessionExecutable.doInNewSession(SessionExecutable.java:38)
         at com.splwg.base.api.batch.StandaloneExecuter.submitProcess(StandaloneExecuter.java:188)
         at com.splwg.base.api.batch.StandaloneExecuter.runOnGrid(StandaloneExecuter.java:153)
         at com.splwg.base.api.batch.StandaloneExecuter.run(StandaloneExecuter.java:137)
         at com.splwg.base.api.batch.StandaloneExecuter.main(StandaloneExecuter.java:357)
    18:24:14,652 [main] ERROR (com.splwg.base.support.grid.node.SingleThreadedGrid) Exception trapped in the highest level of a distributed execution context.
    What could be the reason?

    You need to specify an appropriate folder on the server,
    e.g. choose /spl/sploutput/letterextract/

  • Batch Process Export As and Save As JPEG Question?

    What is the method for creating scaled JPEGs from PNG files with batch processing (File | Batch Process...)?
    Application environment: Fireworks CS4.
    Using History panel, I manually captured instructions
    Modify | Transform | Numeric Transform (13%)
    Modify | Canvas | Fit Canvas
    File | Save As (JPEG)
    History panel's saved JavaScript file script:
    fw.getDocumentDOM().scaleSelection(0.11999999731779099, 0.11999999731779099, "autoTrimImages transformAttributes");
    fw.getDocumentDOM().setDocumentCanvasSizeToDocumentExtents(true);
    fw.saveDocumentAs(null, true);
    Running this sequence as a batch job requires manually selecting JPEG and clicking OK each time.
    The CS4 documentation defines saveDocumentAs with a different argument list: fw.saveDocumentAs(document). The CS4 documentation also has fw.exportDocumentAs(document, fileURL, exportOptions) to export JPEG (80% quality). The CS4 feature File | Image Preview... | Export does not create a history record for guidance. I cannot find an example of either instruction saving a file to a local folder in JPEG format. My objective is to add an additional batch step, Rename (prefix), for the JPEG output, and to script this as a batch process.

    Joyce,
    Thank you.  Your suggestion helped clear my mental log jam.
    Fireworks batch scripting seems to work in reverse.
    Using "Batch Process..." scripting interface 'Export' (as JPEG) option first, 'Rename' (add Prefix) option second, and then follow these by using the history panel's 2 previously saved steps (Numeric Transform, Fit Canvas), batch process now works.  PNG file (list) is saved in targeted folder as a reduced scale JPEG with a prefixed file name.
    Batch Process allows the entire newly defined command sequence to be saved (script.js).  Placing this (renamed) file into Firework's batch look-up directory for scripts (C:\Documents and Settings\account_name\Application Data\Adobe\Fireworks CS4\Commands) does not work.  The file name does not display in "Batch Process" window "Batch options:" Commands drop-down list.
    Batch Process only works by recreating the above steps each use.
    The new (JavaScript) file is 26 KB.  Is script file size limited in Fireworks "Batch options:" feature?

  • SAP HANA Live slow query times

    Hi, we have implemented HANA Live, which is SAP's Best Practice and standard business content, across many lines of business: FI, CO, SD, MM, etc.
    But when we run reports, the average response time is between 12 and 16 seconds. We used all possible SAP tools: Lumira, BO Crystal Reports, Webi, SAP Design Studio, OLAP Analysis and BO Explorer, but the response times are quite slow everywhere.
    The HANA Live calculation views are supposed to be already optimized for execution, but when we run them directly in SAP HANA Studio, the first run takes 12 seconds. On consecutive runs the time improves drastically. I'm thinking that maybe we are missing some server configuration? It is supposed to be in-memory already, so why is it so slow?
    Thanks for your ideas.

    Hi, thanks for the replies guys.
    the view I'm trying is SalesOrderValueTrackingQuery
    the report name is "Sales Amount Analysis", which is SAP Design Studio based dashboard
    These are the tables used:
    SAP_ECC.ADRC
    SAP_ECC.MAKT
    SAP_ECC.PA0001
    SAP_ECC.KNA1
    SAP_ECC.T001
    SAP_ECC.T006
    SAP_ECC.TSPA
    SAP_ECC.TSPAT
    SAP_ECC.TVAK
    SAP_ECC.TVAKT
    SAP_ECC.TVKO
    SAP_ECC.TVKOT
    SAP_ECC.TVTW
    SAP_ECC.TVTWT
    SAP_ECC.TSAD3T
    SAP_ECC.VBAK
    SAP_ECC.VBAP
    SAP_ECC.VBEP
    SAP_ECC.VBFA
    SAP_ECC.VBKD
    SAP_ECC.VBPA
    SAP_ECC.VBUK
    SAP_ECC.VBUP
    SAP_ECC.VEDA
    That's what I got after running it in SQL
    Statement 'SELECT * FROM "_SYS_BIC"."sap.hba.ecc/SalesOrderValueTrackingQuery"'
    successfully executed in 12.936 seconds (server processing time: 12.444 seconds)
    Fetched 1000 row(s) in 1.969 seconds (server processing time: 11 ms 886 µs)
    Result limited to 1000 row(s) due to value configured in the Preferences
    As you can see, when I ran it again and again it still takes almost 13 seconds. So either the tables get unloaded every time I disconnect, or something else is the issue here.
    Also, there are 8,120 records in the main underlying table VBAK.
    The memory usage is also very low.
    I think we might be missing some basic setting here.
    Thanks,
    Sergio
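    One basic thing worth checking is whether the underlying tables stay loaded in memory between runs (a diagnostic sketch against the standard monitoring view M_CS_TABLES; column availability can vary by HANA revision):

```sql
-- Check load status and memory footprint of the main underlying table.
SELECT TABLE_NAME, LOADED, MEMORY_SIZE_IN_TOTAL
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'SAP_ECC' AND TABLE_NAME = 'VBAK';
```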

  • Is it possible to schedule a Webi report based on another input, i.e. expose a time in which a process was finished.

    These are Webi, user created reports.
    Is it possible to schedule a Webi report based on another input, i.e. expose a time in which a process was finished (this can be obtained via a SQL call, service call, etc).
    The issue is that if a user schedules a report for 9PM but a process to move data, etc has not completed then that forces the user to keep re-freshing the report hoping that the process has been complete.
    Ideally, in the selection UI add to the dropdown for 'when' the option 'Use Process X completion Time' (since it is a daily report).
    Then starting around 9PM check to see if that 'time" value is populated then refresh (run) the report. Or it could just be a flag that the process has finished.
    Users in the Webi environment are asking for this, and moving their reports to BO is not an option; that's why they have the custom Webi environment.
    Thank you.

    There is no very good way to create a working event based on a file, as the file needs to be deleted as soon as the event is triggered. However, there are workarounds for this. Here is a link discussing the process: http://scn.sap.com/thread/1677109
    Once you have created a file-based event in CMC > Events > System Events, and made a workaround program to delete the file once the event is triggered, all you need to do in Webi is, in the launch pad, choose Schedule > Events and select the file-based event from the available system events. Here is a screenshot showing how to make a report run based on an event.

  • SAP HANA One and Predictive Analysis Desktop - Time Series Algorithms

    I have been working on a Proof-of-Concept project linking the SAP Predictive Analysis Desktop application to the SAP HANA One environment.
    I have modeled that data using SAP HANA Studio -- created Analytic views, Hierarchies, etc. -- following the HANA Academy videos.  This has worked very well in order to perform the historical analysis and reporting through the Desktop Application. 
    However, I cannot get the Predictive Analysis algorithms -- specifically the Time Series algorithms -- to work appropriately using the Desktop tool. It always errors out and points to the IndexTrace for more information, but it is difficult to pinpoint the exact cause of the issue.  The HANA Academy only has videos on Time Series Algorithms using SQL statements which will not work for my user community since they will have to constantly tweak the data and algorithm configuration. 
    In my experience so far with Predictive Analysis desktop and the Predictive Algorithms, there is a drastic difference between working with Local .CSV / Excel files and connecting to a HANA instance.  The configuration options for using the Time Series Algorithms are different depending upon the data source, which seems to be causing the issue.  For instance, when working with a local file, the Triple Exponential Smoothing configuration allows for the specification of which Date field to use for the calculation.  Once the data source is switched to HANA, it no longer allows for the Date field to be specified.  Using the exact same data set, the Algorithm using the local file works but the HANA one fails. 
    From my research thus far, everyone seems to be using PA with local files or running the predictive algorithms directly in HANA using SQL. I cannot find much of anything useful related to combining PA Desktop with HANA.
    Does anyone have any experience utilizing the Time Series Algorithms in PA Desktop with a HANA instance?   Is there any documentation of how to structure the data in HANA so that it can be properly utilized in PA desktop? 
    HANA Info:
    HANA One Version: Rev 52.1
    HANA Version: 1.00.66.382664
    Predictive Analysis Desktop Info:
    Version: 1.0.11
    Build: 708
    Thanks in advance --
    Brian

    Hi,
    If you use a CSV or XLS data source, you will be using a Native algorithm or an R algorithm in SAP Predictive Analysis. When you connect to HANA, SAP Predictive Analysis uses a PAL algorithm, which runs on the HANA server.
    Coming to your question regarding the difference: in the SAP PA Native algorithm we can provide the date variable, and the algorithm picks the seasonal information from the date column. Neither R nor SAP HANA PAL supports a date column, so we need to configure the seasonal information in the algorithm properties.
    R properties:
    1) Period: the periodicity of the data. Monthly: 12; Quarterly: 4; Custom: use this for weekly, daily or hourly data.
    2) Start Year: the starting year. The start year is not used by the algorithm for calculating the time series, but it helps PA generate the visualization (time series chart) by simulating the year and periodicity information.
    3) Starting Period: if your data is quarterly and your recordings start from Q2, enter 2 as the start period.
    Example: if the data periodicity is monthly and the data starts from Feb 1979, provide the following information:
    Period: 12
    Start year: 1979
    Start period: 2
    PAL properties: same as the properties defined for R.
    Thanks
    Ashok
    [email protected]
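    The period / start-year / start-period bookkeeping described above can be sketched like this (an illustrative Python helper, not part of the PA or PAL APIs):

```python
def period_labels(n, period=12, start_year=1979, start_period=2):
    """Generate (year, period-within-year) labels for n observations.

    For monthly data (period=12) starting Feb 1979 (start_period=2),
    the labels run (1979, 2), (1979, 3), ..., rolling into the next
    year after the last period - the same simulation PA does to draw
    the time series chart.
    """
    labels = []
    year, p = start_year, start_period
    for _ in range(n):
        labels.append((year, p))
        p += 1
        if p > period:       # roll over into the next year
            p, year = 1, year + 1
    return labels

# Example from the reply: monthly data starting Feb 1979.
print(period_labels(3))  # [(1979, 2), (1979, 3), (1979, 4)]
```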

  • Two processes running at the same time in Lookout

    I have installed Lookout 5.0 with 200 I/O points onto our server computer. The motion control application sits next to the 200 I/O points through an OPC PMAC server/client. Now I would like to have a second process on the same server for troubleshooting and testing without stopping the motion control process. However, this second testing process could also have many I/O points through serial and USB ports. I assume that the total number of I/O points of both processes will be greater than 200. I prefer to have independent processes for control and testing because of access levels. Can I have these two processes running at the same time when needed?

    Hi,
    From your description you are using a third party OPC server for the motion application. You could have a second Lookout process communicating with the same OPC server with no problems, as long as you do not exceed the number of I/O points your license supports.
    Also, the process you are using for testing obviously could not overwrite datamembers (or registers if you will) that would interfere in the overall application, in other words you can test your application as long as you keep the coherency of the test.
    So the answer would be: yes, it is possible; however, you are still limited to the number of I/Os your license supports. You may even consider upgrading the number of I/Os in your license.
    Best Regards
    Andre Oliveira

  • Universe based on SAP HANA OLAP connection

    Can I create universe based on SAP HANA OLAP connection ?
    How can I access a SAP HANA view that was created using an OLAP connection in Webi and Dashboards? It does not allow creating a universe on top of a SAP HANA OLAP connection and gives an error.
    SAP BusinessObjects query and reporting applications can directly connect to OLAP SAP HANA connections. No universe is required, only a published OLAP SAP HANA connection.
    Thanks

    OLAP connections are only meant for tools that use OLAP connectivity, like BEx, and HANA analytic views and calculation views are also treated as OLAP cubes by them.
    So these connections are only available with analysis products like Analysis Office (which is meant to be used with BEx or OLAP cubes) and Design Studio, which again is mainly built around the OLAP paradigm.
    You cannot create a universe or Webi document on an OLAP connection to HANA as of now. However, this may be exposed in a future release of BO, which is not public yet.

  • Hi, I have 50 InfoObjects as part of my aggregates, and 10 of those InfoObjects have received changes in master data, so in my process chain the attribute change run has been running for a long time. Can I kill the job and repeat it?

    Hi, I have 50 InfoObjects as part of my aggregates, and 10 of those InfoObjects have received changes in master data, so in my process chain the attribute change run has been running for a long time. Can I kill the job and repeat it?

    Hi,
    I believe this is your production system, so don't just cancel it; look at the job log first. If it is still processing, don't kill it; wait for the change run to complete. But if you can see that nothing is happening and it has been stuck for a long time, then you can go ahead and cancel it.
    Please be careful, as these kinds of jobs can cause problems if you cancel them in the middle.
    Regards,
    Arminder Singh

  • Time Gap in Message Processing in SAP PI

    Dear All,
    We have a SOAP-to-Proxy synchronous scenario in which a non-SAP system calls a SAP PI SOAP service and sends data to PI for processing in ECC.
    We are facing an issue: suppose the non-SAP system sends data to SAP PI around 11 AM; when we check in PI, the message does not show up at that time, but only appears 10 minutes, or sometimes 30-40 minutes, later.
    Please help.

    Hi,
    Check the below:
    How much time does this interface take in SAP PI? If it is synchronous, by default PI will wait 1 minute for a response and throw a timeout exception to the caller if it doesn't get one.
    The sender system and the PI machine might be in different time zones; because of that, you might perceive a time difference.
    Check for any network issues between the sender system and PI by identifying where exactly the delay is happening.
    Regards
    Venkat

  • Integration Process Config SAP Hana Cloud integration with SuccessFactors

    Dear all,
    We are trying to connect SuccessFactors (SF) with our SAP backend. I'm currently reading the following document: "Integration Add-On 3.0 for SAP ERP Human Capital and SF".
    I'm fairly new to SAP, so I have a few questions. Since I didn't find the answer elsewhere in the community, maybe this topic can serve to ask a few questions about my setup process.
    (Questions are in Bold)
    Question 1:
    The document dictates: "Discover the packages for qualification data integration from the SAP HANA Cloud catalog. Once imported, please
    configure them as per your requirements. The integration flows are shown in the following table."
    I tried searching for the SAP HANA Cloud catalog, but I couldn't find where it is or what it is.
    Question 2:
    Where do you set up the connection between SF and SAP? And do you have to do more customizing in SF (where is this described)?
    Thanks a lot for any information regarding this topic.
    Kind regards
    EDIT 1:
    Any reading suggestions on this topic?

    Hi Barry,
    Both the links work for me. Can you please check if this issue for you is because of your login/authorization?
    Regards,
    Vasanth
