SAP HANA Live slow query times
Hi, we have implemented HANA Live, which is SAP's Best Practice and standard Business Content, across many lines of business: FI, CO, SD, MM, etc.
But when we run reports, the average response time is between 12 and 16 seconds. We tried all possible SAP tools: Lumira, BO Crystal Reports, Webi, SAP Design Studio, OLAP Analysis and BO Explorer, but the response times are quite slow everywhere.
The HANA Live calculation views are supposed to be optimized for execution already, but when we run them directly in SAP HANA Studio, the first run takes about 12 seconds. On consecutive runs the time improves drastically. I'm thinking that maybe we're missing some server configuration? It's supposed to be in-memory already, so why is it so slow?
Thanks for your ideas.
Hi, thanks for the replies guys.
the view I'm trying is SalesOrderValueTrackingQuery
the report name is "Sales Amount Analysis", which is SAP Design Studio based dashboard
These are the tables used:
SAP_ECC.ADRC
SAP_ECC.MAKT
SAP_ECC.PA0001
SAP_ECC.KNA1
SAP_ECC.T001
SAP_ECC.T006
SAP_ECC.TSPA
SAP_ECC.TSPAT
SAP_ECC.TVAK
SAP_ECC.TVAKT
SAP_ECC.TVKO
SAP_ECC.TVKOT
SAP_ECC.TVTW
SAP_ECC.TVTWT
SAP_ECC.TSAD3T
SAP_ECC.VBAK
SAP_ECC.VBAP
SAP_ECC.VBEP
SAP_ECC.VBFA
SAP_ECC.VBKD
SAP_ECC.VBPA
SAP_ECC.VBUK
SAP_ECC.VBUP
SAP_ECC.VEDA
This is what I got after running it in SQL:
Statement 'SELECT * FROM "_SYS_BIC"."sap.hba.ecc/SalesOrderValueTrackingQuery"'
successfully executed in 12.936 seconds (server processing time: 12.444 seconds)
Fetched 1000 row(s) in 1.969 seconds (server processing time: 11 ms 886 µs)
Result limited to 1000 row(s) due to value configured in the Preferences
As you can see, even when I run it again and again it takes almost 13 seconds. So either the view gets unloaded every time I disconnect, or something else is the issue here.
Also, there are 8,120 records in the main underlying table VBAK.
The memory usage is also very low.
I think we might be missing some basic setting here.
Thanks,
Sergio
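The first-run pattern described above (slow once, fast afterwards) usually means the underlying column tables are loaded into memory on demand, or are being unloaded in between. A sketch of how to verify this from the SQL console, assuming the SAP_ECC schema and table names listed above:

```sql
-- Check whether the main tables behind the view are currently loaded;
-- LOADED is 'NO', 'PARTIALLY', or 'FULL'
SELECT SCHEMA_NAME, TABLE_NAME, LOADED
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'SAP_ECC'
  AND TABLE_NAME IN ('VBAK', 'VBAP', 'VBEP', 'VBFA', 'VBPA');

-- See if and why tables were recently evicted from memory
SELECT UNLOAD_TIME, TABLE_NAME, REASON
FROM M_CS_UNLOADS
WHERE SCHEMA_NAME = 'SAP_ECC'
ORDER BY UNLOAD_TIME DESC;

-- Pre-warm a table (all columns) so the first query does not pay the load cost
LOAD "SAP_ECC"."VBAK" ALL;
```

If M_CS_UNLOADS shows entries with reason LOW MEMORY, the overall sizing or the global_allocation_limit is worth reviewing.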
Similar Messages
-
Hi Experts,
We are planning to implement SAP HANA Live for ERP operational reporting. I saw there are 786 views delivered with it, which include:
1) Query views
2) Reuse Views
3) Private Views
Query views always contain the word "Query" at the end of the view name, which helps in identifying them, but I am unable to distinguish between reuse and private views. Can anyone tell me how to distinguish whether a view is a reuse view or a private view?
Thanks in anticipation.
Regards,
Victor -
Hi Ramakrishnan,
How do you get access to the 'VDM properties' tab?
I don't have it in my own HANA Studio.
I guess this tab is also the right place to assign a view to the proper product/module hierarchy in the HANA Live Browser.
Currently, all my custom views fall into the "N/A - Undefined" section, which is highly frustrating!
Is this tab dedicated to SAP developers?
Otherwise, do you know how to get access to it?
Thanks.
Best Regards
Stephane -
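On the reuse-vs-private question above: the authoritative flag is the VDM category shown in the view properties, but you can at least get an overview of the delivered views and their packages from the repository. A sketch, assuming the standard sap.hba.ecc content package; the naming conventions (the "Query" and "ValueHelp" suffixes) are the documented part, the rest still needs the VDM properties to classify reliably:

```sql
-- List the delivered HANA Live calculation views per package; query views
-- end in "Query", value help views in "ValueHelp", the rest are reuse or
-- private views (check the VDM category in the view properties to be sure)
SELECT PACKAGE_ID, OBJECT_NAME
FROM "_SYS_REPO"."ACTIVE_OBJECT"
WHERE PACKAGE_ID LIKE 'sap.hba.ecc%'
  AND OBJECT_SUFFIX = 'calculationview'
ORDER BY PACKAGE_ID, OBJECT_NAME;
```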
SAP HANA Live View - VDM Online Documentation
Hello Experts,
I am interested in using SAP HANA Live views for CRM, but we are experiencing difficulties installing the SAP HANA Live Browser. Is there documentation for the VDM available online? Just like for BI Content, where documentation is available online showing technical details like the InfoObjects, mappings, etc.
Regards,
DaSaint -
Hi,
Did you try checking this link? https://help.sap.com/saphelp_hba/helpdata/en/9c/382618453244d8aaa9e460a77f5de0/frameset.htm
Thanks & Regards, A.Dinesh -
I am searching for a document with the HANA Live content views' column mapping to SAP tables. Please let me know where I can find it.
Hi Srinivas,
I guess this is the same question as the one about the SD content, so my answer is the same:
if you have access to the HANA Live RDS documentation, which comes as part of the so-called Step-by-Step Guide, you can find a document RDS_SHL_HANA10V5_Technical_Content_Mapping_EN_XX.xls in the folder structure. To get to this file you need to drill down this path: ..RDS SBS Guides\HANA Live\V5\Serv_Enabled\RDS
In that file you will find all the tables used in all the RDS content.
Regards
Miklos
Solution and Knowledge Packaging
SAP Labs -
Slow Query time with Function in Group By
I have a PL/SQL function that computes a status based on several inputs. When the function is run in a standard query without a GROUP BY, it is very fast. When I try to count or sum other columns in the select (thus requiring the GROUP BY), my query response time explodes from seconds to minutes.
My query:
SELECT
  ben.atm_class( 'DBT', 'CLA', 6, 1245 ),
  COUNT (DISTINCT ax.HOUSEHOLD_KEY)
FROM
  ADM.PRODUCT p,
  ADM.ACCOUNT_CROSS_FUNCTIONAL ax
WHERE
  ax.MONTH_KEY = 1245
  AND ax.PRODUCT_KEY = p.PRODUCT_KEY
  AND ax.HOUSEHOLD_KEY IN (6)
GROUP BY
  p.PTYPE, p.STYPE,
  ben.atm_class( 'DBT', 'CLA', 6, 1245 )
My explain plan for the query with the Group By:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 3 10
SORT GROUP BY 3 60 10
NESTED LOOPS 3 60 6
TABLE ACCESS BY LOCAL INDEX ROWID ACCOUNT_CROSS_FUNCTIONAL 3 33 3 23 23
INDEX RANGE SCAN NXIF312ACCOUNT_CROSS_FUNCTION 3 2 23 23
TABLE ACCESS BY INDEX ROWID PRODUCT 867 7 K 1
INDEX UNIQUE SCAN PK_PRODUCT_PRODUCTKEY 867
This executes in over 9 minutes.
My query without the GROUP BY:
SELECT
  ben.atm_class( 'DBT', 'CLA', 6, 1245 ),
  ax.HOUSEHOLD_KEY
FROM
  ADM.PRODUCT p,
  ADM.ACCOUNT_CROSS_FUNCTIONAL ax
WHERE
  ax.MONTH_KEY = 1245
  AND ax.PRODUCT_KEY = p.PRODUCT_KEY
  AND ax.HOUSEHOLD_KEY IN (6)
My explain plan without the Group By:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 3 3
NESTED LOOPS 3 42 3
TABLE ACCESS BY LOCAL INDEX ROWID ACCOUNT_CROSS_FUNCTIONAL 3 33 3 23 23
INDEX RANGE SCAN NXIF312ACCOUNT_CROSS_FUNCTION 3 2 23 23
INDEX UNIQUE SCAN PK_PRODUCT_PRODUCTKEY 867 2 K
This executes in 6 seconds
Any thoughts on why it takes 90 times longer to perform the GROUP BY sort?
The plan didn't paste:
no group by:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 3 6
NESTED LOOPS 3 60 6
TABLE ACCESS BY LOCAL INDEX ROWID ACCOUNT_CROSS_FUNCTIONAL 3 33 3 23 23
INDEX RANGE SCAN NXIF312ACCOUNT_CROSS_FUNCTION 3 2 23 23
TABLE ACCESS BY INDEX ROWID PRODUCT 867 7 K 1
INDEX UNIQUE SCAN PK_PRODUCT_PRODUCTKEY 867
group by:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=CHOOSE 3 10
SORT GROUP BY 3 60 10
NESTED LOOPS 3 60 6
TABLE ACCESS BY LOCAL INDEX ROWID ACCOUNT_CROSS_FUNCTIONAL 3 33 3 23 23
INDEX RANGE SCAN NXIF312ACCOUNT_CROSS_FUNCTION 3 2 23 23
TABLE ACCESS BY INDEX ROWID PRODUCT 867 7 K 1
INDEX UNIQUE SCAN PK_PRODUCT_PRODUCTKEY 867 -
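One likely cause, though the thread never states it outright: the PL/SQL function is invoked again for every row feeding the SORT GROUP BY, and each invocation is a SQL-to-PL/SQL context switch. Since all four arguments are literals here, a sketch of a workaround is to evaluate the function once in an inline view against DUAL and group on the precomputed value (the `status` alias is made up for illustration):

```sql
SELECT
  f.status,
  COUNT(DISTINCT ax.HOUSEHOLD_KEY)
FROM
  ADM.PRODUCT p,
  ADM.ACCOUNT_CROSS_FUNCTIONAL ax,
  -- evaluated once for the whole query, not once per row
  (SELECT ben.atm_class('DBT', 'CLA', 6, 1245) AS status FROM dual) f
WHERE
  ax.MONTH_KEY = 1245
  AND ax.PRODUCT_KEY = p.PRODUCT_KEY
  AND ax.HOUSEHOLD_KEY IN (6)
GROUP BY
  p.PTYPE, p.STYPE, f.status;
```

Declaring the function DETERMINISTIC (if it really is deterministic) can also let Oracle cache repeated calls with the same arguments.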
Slow query times with "contains" and "or"
We're running Oracle 9.2.0.4 on RHEL 3
I have a simple table, "docinfo". I've created a multicolumn Text index for docinfo called "repoidx". I have five cases below, with the fourth one being the most difficult to understand. I have a primary key on "docinfo" but do not have any additional indexes on "docinfo" right now because we're still testing the design. I'm curious about what is magical about using "or" plus "contains" in the same query (case 4).
[case 1 - simple like]
select count(docid)
from sa.docinfo
where
author like '%smith%'
Elapsed: 00:00:00.02
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1468 Card=1 Bytes=15)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'DOCINFO' (Cost=1468 Card=12004 Bytes=180060)
[case 2 - simple contains]
select count(docid)
from sa.docinfo
where contains(repoidx,'facts')>0
Elapsed: 00:00:01.00
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3905 Card=1 Bytes=12)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'DOCINFO' (Cost=3905 Card=21278 Bytes=255336)
3 2 DOMAIN INDEX OF 'IDX_DOCINFO_REPOIDX' (Cost=3549)
[case 3 - simple like _and_ simple contains]
select count(docid)
from sa.docinfo
where
contains(repoidx,'facts')>0
and
author like '%smith%'
Elapsed: 00:00:00.02
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3905 Card=1 Bytes= 23)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'DOCINFO' (Cost=3905 Card=1064 Bytes=24472)
3 2 DOMAIN INDEX OF 'IDX_DOCINFO_REPOIDX' (Cost=3549)
[case 4 - simple like _or_ simple contains]
select count(docid)
from sa.docinfo
where
contains(repoidx,'facts')>0
or
author like '%smith%'
Elapsed: 00:01:37.02
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1468 Card=1 Bytes= 23)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'DOCINFO' (Cost=1468 Card=32218 Bytes=741014)
[case 5 - simple like union simple contains]
select count(docid)
from sa.docinfo
where
contains(repoidx,'facts')>0
union
select count(docid)
from sa.docinfo
where
author like '%smith%'
Elapsed: 00:00:00.04
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=5581 Card=2 Bytes= 27)
1 0 SORT (UNIQUE) (Cost=5581 Card=2 Bytes=27)
2 1 UNION-ALL
3 2 SORT (AGGREGATE) (Cost=4021 Card=1 Bytes=12)
4 3 TABLE ACCESS (BY INDEX ROWID) OF 'DOCINFO' (Cost=3905 Card=21278 Bytes=255336)
5 4 DOMAIN INDEX OF 'IDX_DOCINFO_REPOIDX' (Cost=3549)
6 2 SORT (AGGREGATE) (Cost=1560 Card=1 Bytes=15)
7 6 TABLE ACCESS (FULL) OF 'DOCINFO' (Cost=1468 Card=12004 Bytes=180060)
Case 1:
There is no index on author and it would not be able to use one if there was, due to the leading %, so it does a full table scan, which is still quick, since that is all there is to the query.
Case 2:
It has an index on repoidx, so it uses it and it is quick.
Case 3:
It has an index on repoidx, so it uses it. Since "and" is used, both conditions must be met. It has quickly obtained the results that match the first condition using the index, so it only has to check those rows, not every row in the table, to see if they also match the second condition.
Case 4:
Either condition may be met. It does not have an index on author, so it cannot use an index for that condition. And since it must not duplicate the rows where both conditions are met, it cannot simply combine the results of checking each condition separately. So it has to do a full table scan in order to check every row for either condition, which is why it is slow.
Case 5:
select count (docid)
from docinfo
where contains (repoidx, 'facts') > 0
union
select count (docid)
from docinfo
where author like '%smith%';
is not the same as:
select count (docid)
from (select docid
from docinfo
where contains (repoidx, 'facts') > 0
union
select docid
from docinfo
where author like '%smith%');
which is the same as case 4 and therefore just as slow. Your case 5 is just taking the union of 2 numbers, which could result in one row or two rows, depending on whether the numbers happen to match or not. Consider the following:
scott@ORA92> SELECT job, empno
2 FROM emp
3 /
JOB EMPNO
CLERK 7369
SALESMAN 7499
SALESMAN 7521
MANAGER 7566
SALESMAN 7654
MANAGER 7698
MANAGER 7782
ANALYST 7788
PRESIDENT 7839
SALESMAN 7844
CLERK 7876
CLERK 7900
ANALYST 7902
CLERK 7934
14 rows selected.
scott@ORA92> SELECT job, COUNT (empno)
2 FROM emp
3 GROUP BY job
4 /
JOB COUNT(EMPNO)
ANALYST 2
CLERK 4
MANAGER 3
PRESIDENT 1
SALESMAN 4
scott@ORA92> SELECT COUNT (empno)
2 FROM emp
3 WHERE job = 'SALESMAN'
4 /
COUNT(EMPNO)
4
scott@ORA92> SELECT COUNT (empno)
2 FROM emp
3 WHERE job = 'CLERK'
4 /
COUNT(EMPNO)
4
scott@ORA92> SELECT COUNT (empno)
2 FROM emp
3 WHERE job = 'SALESMAN'
4 UNION
5 SELECT COUNT (empno)
6 FROM emp
7 WHERE job = 'CLERK'
8 /
COUNT(EMPNO)
4
scott@ORA92> -- the above is the same as:
scott@ORA92> SELECT 4 FROM DUAL
2 UNION
3 SELECT 4 FROM DUAL
4 /
4
4
scott@ORA92> -- it is not the same as:
scott@ORA92> SELECT COUNT (empno)
2 FROM (SELECT empno
3 FROM emp
4 WHERE job = 'SALESMAN'
5 UNION
6 SELECT empno
7 FROM emp
8 WHERE job = 'CLERK')
9 /
COUNT(EMPNO)
8
scott@ORA92> -- if the numbers are different, you get 2 rows:
scott@ORA92> SELECT COUNT (empno)
2 FROM emp
3 WHERE job = 'ANALYST'
4 UNION
5 SELECT COUNT (empno)
6 FROM emp
7 WHERE job = 'MANAGER'
8 /
COUNT(EMPNO)
2
3
scott@ORA92> -- the above is the same as:
scott@ORA92> SELECT 2 FROM DUAL
2 UNION
3 SELECT 3 FROM DUAL
4 /
2
2
3
scott@ORA92> -- it is not the same as:
scott@ORA92> SELECT COUNT (empno)
2 FROM (SELECT empno
3 FROM emp
4 WHERE job = 'ANALYST'
5 UNION
6 SELECT empno
7 FROM emp
8 WHERE job = 'MANAGER')
9 /
COUNT(EMPNO)
5 -
How to use Tree control like a Tree in SAP HANA Live Browser?
Hello SDN!
I need a Tree control such as following:
As I understand it, there is no such standard control in the SAPUI5 control library. Is it possible to use this control anyway? If yes, how can I embed it in my app?
Regards,
Lev -
Hi Sandip!
Thanks for your tip. I've applied this example to my app. The tree is working now, but there is a little problem: on each expand/collapse and click on a tree item, the onAfterRendering() method is called. That means a complete re-render of the tree is executed, which does not look good. How can I avoid this?
Lev -
Hello Experts,
We are in the process of a HANA standalone implementation with Design Studio as the reporting tool. While modeling, I could not figure out the answers to some of the questions below. Experts, please help.
Best way of modeling: SAP HANA Live is completely built on calculation views; there are no attribute and analytic views. I have gotten different answers as to why there are only calculation views and no analytic and attribute views. We are on SP7, the latest version. This is a brand new HANA on top of a non-SAP (DB2) source. What is the best way to model this scenario? Meaning, can we model everything in calculation views like SAP HANA Live does, or do you suggest using the standard attribute, analytic and calculation views to build the data model? Is SAP moving away from AV & AT to only calculation views to simplify the modeling approach?
Reporting: We are using Design Studio as the front-end tool. Just for example, assume we are
using BW: we bring all the data into BW from different sources, build the cubes and use BEx queries. In the BEx query we would use restricted key figures, calculated key figures, calculations etc. Reporting-wise we have the same requirements: calculations, RKF, CKF, sum, avg etc. If we are using Design Studio on top of standalone HANA, where do I need to implement all these calculations? Is it in different views? (From a reporting perspective, if it were a BW system, I would have done all the calculations in BEx.)
Universe: If we are doing all the calculations in SAP HANA, like RKF, CKF and other calculations, what is the point of having an additional universe layer, since the reporting components can access the views directly? In one of our POCs we found that using a universe affects performance.
Real-time reporting: Our overall objective is to meet real-time or close to real-time reporting requirements. How can Data Services help? Meaning, I can schedule the data loads every 3 or 5 minutes to pull the data from the source. If I am using Data Services, how soon can I get the data into HANA? I know it depends on the number of records, the transformations between the systems and the network speed. Assuming that I schedule the job every 2 minutes and it takes another 5 minutes to process the Data Services job, is it fair to say that my information will be available in the BOBJ tools within 10 minutes of the creation of the records?
Are there any new ETL capabilities included in SP7? I see some additional features included in SP7. Are some of the concepts discussed still valid, given that in SP7 we have the star join concept?
Thanks
Magge -
magge kris wrote:
Hello Experts,
We are in the process of a HANA standalone implementation with Design Studio as the reporting tool. While modeling, I could not figure out the answers to some of the questions below. Experts, please help.
Best way of modeling: SAP HANA Live is completely built on calculation views; there are no attribute and analytic views. I have gotten different answers as to why there are only calculation views and no analytic and attribute views. We are on SP7, the latest version. This is a brand new HANA on top of a non-SAP (DB2) source. What is the best way to model this scenario? Meaning, can we model everything in calculation views like SAP HANA Live does, or do you suggest using the standard attribute, analytic and calculation views to build the data model? Is SAP moving away from AV & AT to only calculation views to simplify the modeling approach?
>> I haven't read any "official" guidance to move away from typical modeling approach, so I'd say stick with the usual approach- AT, then AV, then CA views. I was told that the reason for different approach with HANA Live was to simplify development for mass production of solutions.
Reporting: We are using Design Studio as the front-end tool. Just for example, assume we are
using BW: we bring all the data into BW from different sources, build the cubes and use BEx queries. In the BEx query we would use restricted key figures, calculated key figures, calculations etc. Reporting-wise we have the same requirements: calculations, RKF, CKF, sum, avg etc. If we are using Design Studio on top of standalone HANA, where do I need to implement all these calculations? Is it in different views? (From a reporting perspective, if it were a BW system, I would have done all the calculations in BEx.)
>> I'm not a BW guy, but from a HANA perspective: implement them where they make the most sense. In some cases this is obvious; restricted columns are only available in analytic views. It's hard to provide more complex advice here, as it depends on your scenario(s). Review your training materials, review SCN posts, and you should start to develop a better idea of where to model particular requirements. Most of the time in typical BI scenarios, requirements map nicely to straightforward modeling approaches such as attribute/analytic/calculation views. However, some situations, such as slowly-changing dimensions or certain kinds of calculations (e.g. calculate-before-aggregation with BODS as the source, where the calculation should be done in the ETL logic), can be more complex. If you have specific scenarios that you're unsure about, post them here on SCN.
Universe: If we are doing all the calculations in SAP HANA, like RKF, CKF and other calculations, what is the point of having an additional universe layer, since the reporting components can access the views directly? In one of our POCs we found that using a universe affects performance.
>>> Depends on what you're doing. A universe generates SQL just like the front-end tools do, so bad performance implies bad modeling. Generally speaking, universes *can* create a more autonomous reporting architecture. But if your scenario doesn't require it, then by all means avoid the additional layer if there's no added value.
Real-time reporting: Our overall objective is to meet real-time or close to real-time reporting requirements. How can Data Services help? Meaning, I can schedule the data loads every 3 or 5 minutes to pull the data from the source. If I am using Data Services, how soon can I get the data into HANA? I know it depends on the number of records, the transformations between the systems and the network speed. Assuming that I schedule the job every 2 minutes and it takes another 5 minutes to process the Data Services job, is it fair to say that my information will be available in the BOBJ tools within 10 minutes of the creation of the records?
Are there any new ETL capabilities included in SP7? I see some additional features included in SP7. Are some of the concepts discussed still valid, given that in SP7 we have the star join concept?
>>> Not exactly sure what your question here is. Your limits with BODS are the same as with any other target system; they don't depend on HANA. The second the record(s) are committed to HANA, they are available. They may be in delta storage, but they're available. You just need to work out how often to schedule BODS, and if your jobs take 5 minutes to run but you're scheduling executions every 2 minutes, you're going to run into problems...
Thanks
Magge -
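On the delta-storage point in the last answer: newly committed rows sit in the write-optimized delta store and are immediately visible to queries; a delta merge later moves them into the read-optimized main store. A sketch of how to watch and trigger this from the SQL console, with `MYSCHEMA.SALES_DOC` standing in for whichever table BODS loads:

```sql
-- How much data is waiting in the delta store per table
SELECT TABLE_NAME, RAW_RECORD_COUNT_IN_DELTA, MEMORY_SIZE_IN_DELTA
FROM M_CS_TABLES
WHERE SCHEMA_NAME = 'MYSCHEMA'
ORDER BY MEMORY_SIZE_IN_DELTA DESC;

-- Trigger a merge manually if auto-merge is not keeping up with the
-- frequent BODS loads
MERGE DELTA OF "MYSCHEMA"."SALES_DOC";
```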
SAP Hana FAQ Frequently Asked Questions
Hi All,
SAP has published a fantastic Knowledge Base Article containing an extremely detailed
SAP HANA FAQ - Frequently Asked Questions
Go and download it now, it's here:
2000003 - FAQ: SAP HANA
And then come back to it from time to time, because not everything in there is released yet.
Best regards,
Andy. -
For the sake of completeness, there are now a number of very useful HANA FAQ OSS Notes; here's the list so far:
. OSS 2000003 - FAQ: SAP HANA
. OSS 2000000 - FAQ: SAP HANA Performance Optimization
. OSS 1640741 - FAQ: "DB users for the DBA Cockpit for SAP HANA"
. OSS 2039883 - FAQ: SAP HANA database and storage snapshots
. OSS 2044468 - FAQ: SAP HANA Partitioning
. OSS 2057595 - FAQ: SAP HANA High Availability
. OSS 1999997 - FAQ: SAP HANA Memory
. OSS 1999880 - FAQ: SAP HANA System Replication
. OSS 2000002 - FAQ: SAP HANA SQL Optimization
. OSS 2073112 - FAQ: SAP HANA Studio
. OSS 1999998 - FAQ: SAP HANA Lock Analysis
. OSS 1905137 - FAQ: SAP HANA - Obsolete tables
. OSS 1999930 - FAQ: SAP HANA I/O Analysis
. OSS 1914584 - SAP HANA Live Browser FAQ
. OSS 2057046 - FAQ: SAP HANA Delta Merges
. OSS 1642148 - FAQ: SAP HANA Database Backup & Recovery
. OSS 2104291 - FAQ - SAP HANA multitenant database containers
. OSS 2100009 - FAQ SAP HANA Savepoints
. OSS 2082286 - FAQ: SAP HANA Graph
. OSS 2081591 - FAQ: SAP HANA Table Distribution
. OSS 2053330 - FAQ: Operations Recommendation on SAP HANA Alerts
If you are interested in HANA Basis-related OSS Notes, then the longest list of HANA OSS Notes publicly available on the internet is over here.
Best regards,
Andy -
Anyone tried this - Extract data from HANA Live reuse views into BW?
Hello Experts,
I've read from this blog http://scn.sap.com/community/bw-hana/blog/2014/05/26/go-hybrid--sap-hana-live-sap-bw-data-integration that this scenario
> "Loading of data into BW using Reuse Layer of SAP HANA Live as data source (Extract data from HANA Live reuse views into BW)" is possible.
Does anyone have a step-by-step guide on how to do this? Can you please share it?
Regards,
DaSaint -
Hi DaSaint,
best to check the online documentation
Notes about transferring data from SAP HANA using ODP - Modeling - SAP Library
Best regards,
Andreas -
HANA Live Authorization Assistant
Hi Everyone,
I have a question regarding HANA Live Authorization Assistant
As mentioned on help.sap.com:
https://help.sap.com/saphelp_hba/helpdata/en/da/28a39e975f4e85a5eb69d20b5668de/frameset.htm
For a selected SAP NetWeaver ABAP user, the SAP HANA Live Authorization Assistant generates analytic privileges based on his/her assigned PFCG authorizations and collects them, together with the required SELECT object privileges, in a role.
It is stated that SAP delivers metadata for all the relevant views of the virtual data model, which defines the mapping between the authorization fields of authorization objects and the respective attributes of the views.
My question is: how will the Authorization Assistant know about the ABAP authorizations of an ABAP user?
Regards,
Vivek -
Got the answer from the blog below:
http://scn.sap.com/community/services/blog/2014/01/06/hana-live--security-setup
The two tables UST12 and USRBF2 should be replicated into the HANA system.
Regards,
Vivek -
HANA Live on ECC - Should ECC be on Hana DB or any other RDBMS
Hi,
Going through some of the blogs, i need a clarification to install HANA Live.
If ECC is on another RDBMS, should that database be retired and moved to HANA DB, or can the two work side by side?
Thanks
Hari Prasad -
Hi,
HANA Live is an additional SAP HANA delivery unit which you can install on your HANA One server.
HANA Live has SAP-delivered views that are based on standard SAP tables.
So you need to replicate those tables first, then import HANA Live.
The best fit here for replication would be SAP LT (SLT).
you can get more information from here
SAP HANA Live for SAP Business Suite 1.0 – SAP Help Portal Page -
Hi Experts,
I have deployed SAP HANA Live for ECC on my system, but I am not able to import the language file LANG_HCOHBAECC.tgz.
When I try to import, as the same way I have done with the HCOHBAECC.tgz file, I receive the message "Repository: import failed;Import::import(): wrong archive type ET_LANGUAGE_TRANSPORT, you have to use importLanguage() to import this".
How can I import this file to translate the content of HANA Live?
Best Regards,
Renato Ely -
Hello Renato,
The way you chose, importing DUs directly from HANA Studio, is not the recommended way of doing this.
Please check the HANA Live Admin Guide for details.
The recommended way, which is documented there, is to use the "SAP HANA Lifecycle Manager".
This tool automatically takes care of the content and language import.
For more information, please check the Admin Guide directly, which you can find at:
Download and Deploy Content Package - SAP HANA Live for SAP Business Suite Administrator's Guide - SAP Library
Best Regards
Stefan -
Hello, I have a question about SAP HANA. I want to bring the BSEG table into HANA from ECC, but I do not want to do live updates (SLT) and do not want to create all the fields that exist in BSEG in ECC one by one. What features of native HANA would I use to do that?
Thanks. -
Hi Venkat,
You can go with BODS or SLT.
With BODS, you can select the required fields from the BSEG table in ECC and map them to a HANA table.
You can also go with SLT and do a one-time load using the LOAD option. Even here you can select the required fields, not all the fields from the BSEG table.
Regards,
Chandu. -
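To make the field-reduction idea above concrete: since only a subset of BSEG is needed, a slim target table can be created in HANA and filled by BODS (or a one-time SLT load). This is only a sketch; the schema name ECC_DATA, the table name BSEG_SLIM, and the chosen columns are illustrative, with types mirroring the ABAP dictionary (BELNR CHAR 10, GJAHR NUMC 4, WRBTR CURR 13,2):

```sql
CREATE COLUMN TABLE "ECC_DATA"."BSEG_SLIM" (
  MANDT NVARCHAR(3),    -- client
  BUKRS NVARCHAR(4),    -- company code
  BELNR NVARCHAR(10),   -- accounting document number
  GJAHR NVARCHAR(4),    -- fiscal year
  BUZEI NVARCHAR(3),    -- line item
  WRBTR DECIMAL(13,2),  -- amount in document currency
  PRIMARY KEY (MANDT, BUKRS, BELNR, GJAHR, BUZEI)
);
```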
SAP SNC Portal DCM screen performance is very slow and times out
Friends,
The SAP SNC Portal DCM screen performance is very slow and it times out when a user tries to pull data using customer location.
What cleanup activities can we do to improve the overall SNC performance?
We did open an OSS message, but so far no reply from SAP. Has anyone else faced this performance issue?
Users/vendors are complaining about slowness. The query is standard SAP and it is taking a long time (table /LIME/NTREE). It looks like the huge number of records in the /LIME/NTREE table is causing this problem. What are the options to improve the performance?
Thanks in Advance
Hanuman Choudhary -
Hi Team,
Please note the advice from SAP below. Does anyone have experience archiving /LIME records?
Please advise how to start and what the steps are for archiving.
Thanks in advance.
I had a look at the DCM query performance in PH1 system and figured out
that most of the time is spent at the LIME layer of database. The
following LIME tables are having far too many entries and is causing
the bottleneck during the query execution.
/LIME/NLOG_QUAN - 38,165,467
/LIME/PN_ITEM - 19,116,518
/LIME/PN_ITEM_TB - 19,154,124
These tables are storing the historical information about LIME(stock)
updates. Since these table grow with each change/update of stock
information, it will slow down the performance of the system over a
period of time. And to avoid the slow responses, the tables should
ideally be archived on a periodic basis to keep the data volume as
minimal as possible. You may have to discuss with the Business to
determine the number of days of LIME record you would want to retain
in the system. I would strongly recommend you to consider the LIME
archival retaining the minimum days (<=60 days) of historical
information. You can find more information about the Lime Archival
in the Sap Help link:
http://help.sap.com/saphelp_scm2007/helpdata/en/44/2a83121dde23d1e10000000a1553f7/frameset.htm.
Kindly get in touch with your BASIS consultant for the LIME archival.
The application performance should definitely improve after the LIME
archival. Please do not hesitate to get in touch with me in case you
require any further clarification in this regards.
Best Regards