SAP HANA design time roles replication/transport

Dear Experts,
I have been a keen reader of SCN for many years but only just beginning to ask for specific expertise, so please bear with me if I'm outside standard forum etiquette or procedures!  Any guidance gratefully received there.
Having built design-time roles in one system, I am curious whether there is a way to replicate them in other systems.  The latest discussion I could find on transport possibilities seemed limited to producing .hdbroles, which have their own limitations, although transporting isn't one of them.  I would prefer to maintain the roles as design-time objects only.  Does this mean they must be manually recreated in each system, or is it possible to generate SQL from a design-time role and execute it in the target system, while still allowing graphical maintenance of the roles?
Many thanks in advance for your thoughts and experienced feedback.
Kind Regards,
Mark Simmet

Hi Vivek,
Thanks for taking the time to enlighten me here.  The only limitation of design-time roles compared to catalog roles that I could see was my perceived inability to transport/replicate them.
If I build in, say, CRM DEV and want to replicate to other DEVs, I understand now that this is possible through the import of the relevant delivery unit.
I was cautious about attaching these to something for export before getting confirmation from someone with experience of having done so.  Documentation around this appears to be quite limited, with only some old threads discussing the transport/replication of HANA roles.
My preference is also design-time over catalog from what I've seen so far, and as we're on SPS09 it's possible to edit them in the graphical interface.
I'll look into your suggestions on delivery units for import/transport options.
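Also, to answer my own earlier question about "creating an SQL from the design time role": from what I've read so far (so please treat this as a sketch rather than confirmed practice), the design-time source can be read back from the repository, and the activated runtime role is granted through _SYS_REPO rather than with a plain GRANT.  The package and role names below are placeholders, not from our actual system:

-- Read the design-time source of an .hdbrole from the repository
-- (requires read access on _SYS_REPO; CDATA holds the text of the artifact):
SELECT PACKAGE_ID, OBJECT_NAME, CDATA
  FROM "_SYS_REPO"."ACTIVE_OBJECT"
 WHERE PACKAGE_ID    = 'com.acme.security'    -- placeholder package
   AND OBJECT_NAME   = 'Z_REPORTING_USER'     -- placeholder role name
   AND OBJECT_SUFFIX = 'hdbrole';

-- Grant the activated (runtime) role to a user; design-time roles are
-- granted via the _SYS_REPO procedure, not with a plain GRANT statement:
CALL "_SYS_REPO"."GRANT_ACTIVATED_ROLE"('com.acme.security::Z_REPORTING_USER', 'SOME_USER');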
Once again, many thanks for taking the time to reply - very much appreciated.
Regards,
Mark

Similar Messages

  • GP Design Time options unavailable

    I was trying out the Search SAP Webpages process in Guided Procedures.
    I get to see the Guided Procedures role,
    but in the "You Can" section below the gallery, where we can create callable objects, it is blank. I cannot see anything. Am I missing something here?
    Basically, I don't get any options to perform in the GP Design Time role.

    Hi
    What GP roles did you assign to the user? There are around 14 GP roles that come with the portal. Let me know which roles you assigned.
    Thanks
    Lakshmi

  • SAP HANA Load Controller

    Dear HANA Guru,
    Help needed. Can you provide some pointers on the Load Controller?
    Regards,
    Manoj.

    Hello,
    Like what it is good for?
    It is part of the Sybase Replication scenario and it controls the replication. You can read more in the SAP Master Guide:
    page 21 - chapter 2.4 Log-Based Replication
    https://service.sap.com/~sapidb/011000358700000604552011
    The SAP HANA Load Controller, a component that resides in SAP HANA, coordinates the entire replication process: it starts the initial load of source system data to the SAP HANA database in SAP HANA, and communicates with the Sybase Replication Server to coordinate the start of the delta replication.
    Tomas

  • Portal runtime error when creating a separate role for the Transport package

    Hi Experts,
    I have created a separate role for the Transport Package (import/export iViews).
    Normally we have the transport package functionality in system admin.
    Below are the steps I followed to create the new role (trans admin):
    1. Copied the SAP-provided system admin role to a separate folder.
    2. Deleted the remaining portal objects (like UWL, portal display, etc.) except the transport package workset.
    3. Renamed the role to trans admin.
    I have assigned that role to myself, and it works fine for me when I click on export and import. I have the super admin role.
    When I assign this role to some portal users, export is not working.
    When a user clicks on export, they get the below error.
    Portal Runtime Error
    An exception occurred while processing a request for :
    iView : N/A
    Component Name : N/A
    Access denied (Object(s): com.sap.portal.system/security/sap.com/NetWeaver.Portal/medium_safety/com.sap.portal.appdesigner.contentcatalog/components/Framework).
    Exception id: 12:10_31/08/09_0031_21763550
    See the details for the exception ID in the log file
    Looking into the exception ID shows the same access denied error.
    Please advise.
    Thanks
    Sony.

    Hi Raghu,
    Thanks for the reply.
    I had already given full permissions to all users on this trans admin role.
    Thanks in advance.
    Sony.

  • SAP HANA One and Predictive Analysis Desktop - Time Series Algorithms

    I have been working on a Proof-of-Concept project linking the SAP Predictive Analysis Desktop application to the SAP HANA One environment.
    I have modeled that data using SAP HANA Studio -- created Analytic views, Hierarchies, etc. -- following the HANA Academy videos.  This has worked very well in order to perform the historical analysis and reporting through the Desktop Application. 
    However, I cannot get the Predictive Analysis algorithms -- specifically the Time Series algorithms -- to work appropriately using the Desktop tool. It always errors out and points to the IndexTrace for more information, but it is difficult to pinpoint the exact cause of the issue.  The HANA Academy only has videos on Time Series Algorithms using SQL statements which will not work for my user community since they will have to constantly tweak the data and algorithm configuration. 
    In my experience so far with Predictive Analysis desktop and the Predictive Algorithms, there is a drastic difference between working with Local .CSV / Excel files and connecting to a HANA instance.  The configuration options for using the Time Series Algorithms are different depending upon the data source, which seems to be causing the issue.  For instance, when working with a local file, the Triple Exponential Smoothing configuration allows for the specification of which Date field to use for the calculation.  Once the data source is switched to HANA, it no longer allows for the Date field to be specified.  Using the exact same data set, the Algorithm using the local file works but the HANA one fails. 
    From my research thus far, everyone seems to be using PA for local files or running the Predictive Algorithms directly in HANA using SQL.  I cannot find much of anything useful related to combining PA Desktop with HANA.
    Does anyone have any experience utilizing the Time Series Algorithms in PA Desktop with a HANA instance?   Is there any documentation of how to structure the data in HANA so that it can be properly utilized in PA desktop? 
    HANA Info:
    HANA One Version: Rev 52.1
    HANA Version: 1.00.66.382664
    Predictive Analysis Desktop Info:
    Version: 1.0.11
    Build: 708
    Thanks in advance --
    Brian

    Hi,
    If you use a CSV or XLS data source, you will be using a Native algorithm or an R algorithm in SAP Predictive Analysis. When you connect to HANA, SAP Predictive Analysis uses a PAL algorithm, which runs on the HANA server.
    Coming to your question regarding the difference: in an SAP PA Native algorithm we can provide the date variable, and the algorithm picks the seasonal information from that column. Neither R nor SAP HANA PAL supports a date column; instead, the seasonal information is configured in the algorithm properties.
    R properties:
    1) Period: the periodicity of the data.
    Monthly: 12
    Quarterly: 4
    Custom: use this for weekly, daily or hourly data.
    2) Start Year: the starting year. It is not used by the algorithm for calculating the time series, but it helps PA generate the visualization (time series chart) by simulating the year and periodicity information.
    3) Starting Period: if your data is quarterly and the recordings start in Q2, enter 2 as the start period.
    Example: if the data periodicity is monthly and the data starts in February 1979, provide the following:
    Period: 12
    Start year: 1979
    Start period: 2
    PAL properties: the same as the properties defined for R.
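    One sanity check that may be worth doing before pointing PA at the server (a sketch, assuming you have catalog read access on the HANA One instance): confirm that the PAL library and its smoothing functions are actually registered, since PA relies on PAL being installed on the server.
    -- List the PAL functions registered on the server; exact function
    -- names can vary by revision, so the LIKE filter is deliberately loose.
    SELECT AREA_NAME, FUNCTION_NAME
      FROM "SYS"."AFL_FUNCTIONS"
     WHERE AREA_NAME = 'AFLPAL'
       AND FUNCTION_NAME LIKE '%SMOOTH%';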
    Thanks
    Ashok

  • Role of SAP security design consultant

    Hi All,
    What role does an SAP HR (SAP Security Design) consultant play?
    How different is it from a regular SAP HR role?
    Please let me know.
    regards,
    Pratik

    What I assume is that you will have to understand the different roles of the users in that company who need access to the HR system, classify them under categories, set up roles, define authorization profiles, and set up structural authorizations based on the client's requirements.
    As far as HR is concerned, you need to understand the different authorization objects, roles and profiles available in the standard SAP system, set up new ones, and add additional privileges wherever required. Get yourself familiar with the various HR authorization objects.
    There is also a little bit of user management, reporting on infotypes, tracking changes, and modifications to business-critical transactions.

  • Login Issues to SAP HANA from Design Studio

    Hello All ,
    I am pretty new here, so the question might seem a bit elementary. We are trying to connect to our SAP HANA environment from Design Studio. When we ping the XSEngine URL it responds properly, but when we try to connect from Design Studio the following error occurs.
    Here is the landscape we have designed so far:
    HANA Server is installed.
    BOXI 4.1 SP5 server is installed.
    HANA and BO servers are communicating through SSL.
    The Design Studio add-on is installed on the BO server.
    When we tried to explore the path using the HANA explorer, we found that the path mentioned in the error screen does not exist.
    What do we need to do to overcome this error?
    Regards,
    Sourav Roy

    Hi Sourav,
    As mentioned by Karol, you have to deploy the DS-HANA Addon on your HANA System.
    How to Document : Getting Started on SAP Design Studio 1.3 powered by BW on HANA – Part 2
    Overview and Permissions : Design Studio 1.3 on HANA Platform
    Thanks,
    Poovin.

  • Transporting Data models in SAP HANA

    Hi All,
    I was trying to understand how data models can be transported in SAP HANA and came across the different options in the Export option of HANA Studio, like exporting with a delivery unit, developer mode, etc. Can someone please elaborate on the differences between each and the scenario in which each should be used?
    Thanks,
    Sam

    No Justin, I haven't used it in any "production" environments yet, since the client I am working for is still to be upgraded to SP7. On an internal system we are doing some R&D along those lines and developing artifacts from the "Development" perspective.
    I shared this link here for reference.
    Also, I think using the Development perspective for modelling is what is being suggested going forward in blogs like this:
    6 Golden Rules for New SAP HANA Developers
    Maybe John Appleby and his team are already using the "Development" perspective consistently.
    Regards,
    Krishna Tangudu

  • Transport SAP HANA/BWA Index to Target

    Dear Experts,
    I have created an SAP HANA/BWA index using t-code RSDDB in the DEV system based on a BEx query. Now I want to transport the BWA index to the TEST system, but I don't know how.
    I have already transported the BEx query to the TEST system.
    Please help me with how to transport the SAP HANA/BWA index to the TEST system.
    Thanks,
    Regards,
    Karuppiah N

    Hi Karuppiah,
    I am not sure it is possible to transport a BWA index.
    Analytic indexes are not integrated into the BW repository and therefore cannot be transported.
    "Analytic indexes can be created and filled with (transformed) data quickly. They are intended for ad-hoc scenarios. They can also be created as InfoProviders without reference to InfoObjects. They are therefore not integrated into the metadata repository and cannot be transported. "
    Reference: http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/209f3aec-09d9-3110-339c-c127409aee06?QuickLink=index&…
    Regards,
    V Srinivasan

  • SAP HANA runs based on real-time processing or batch processing

    i) Does SAP HANA run based on real-time processing, batch processing, or some other kind of processing?
    Kindly provide the answer ____________________
    ii) Which two new development technologies led to the growth of SAP HANA?
    a.) Columnar structure
    b.) Multicore processing
    c.) RDBMS
    Kindly provide the answer _____________
    Thanks,
    Vigram.

    Hi Vigram,
    SAP HANA supports real-time processing as well as batch processing.
    SAP HANA also benefits from both columnar storage and multicore processing.
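    If it helps to see the columnar part in practice, HANA lets you pick the storage layout per table; analytic workloads normally use the column store. A minimal sketch (table and column names are made up):
    -- Column store: optimized for scans and aggregations over many rows;
    -- this is what the HANA information views are built on.
    CREATE COLUMN TABLE "DEMO_SALES_CS" (
        ORDER_ID  INTEGER PRIMARY KEY,
        MATERIAL  NVARCHAR(18),
        NET_VALUE DECIMAL(15,2)
    );
    -- Row store: still available, typically for small, write-heavy tables.
    CREATE ROW TABLE "DEMO_SALES_RS" (
        ORDER_ID  INTEGER PRIMARY KEY,
        MATERIAL  NVARCHAR(18),
        NET_VALUE DECIMAL(15,2)
    );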

  • Deploying / Transporting SAP HANA-ECC and BW Solution?

    Is there one centralized deployment guide for SAP HANA solutions in the SAP ECC and SAP BW environment?
    Can one of you please point me to a centralized guide for moving/deploying SAP HANA solutions built in one system to the quality and/or production environment?

    Rama,
    there's a pretty straightforward export/import functionality for moving models from dev to QA and prod. Please see the SAP HANA Modeling Guide, chapter "Processing Information Objects".
    You can also use CTS+ in conjunction with this functionality.
    Regards,
    Marc
    SAP Customer Solution Adoption (CSA)

  • Error adding Default roles after transport

    We are trying to transport our GP to another system.
    The transport imported successfully,
    but we get the below error when we try to add the default roles in the new portal system.
    Administration -> Assign default roles ->
    Error/Message: The process template does not contain roles to which you can assign default values.
         Could not retrieve process template
    Design time:
         Error/Message: 1. Cannot retrieve activity template: Development object does not exist in the database
                        2. Cannot retrieve object:
    Please suggest if you have faced a similar issue.

    I have already tried this from Administration -> Assign Default Roles -> select the process; when we click on Open, this error is shown.
    Also in Design time -> select the process -> clicking on "Open", I get this error.
    I have also tried the "Edit All" option to change the version of all the objects and transport them to the target system,
    and also implemented SAP Note 1321013.
    In all the above cases I get the same error.
    I then tried removing the default roles and assigning all the roles to the Initiator, then transported; still, when I try to run the process and open it, I get the same error.
    SAP Note: 1321013:
    Terminate all process instances.
    Unlock all objects.
    Delete all process templates.
    Empty trash.
    Redeploy process templates.
    Release objects after Import.
    (http://wiki.sdn.sap.com/wiki/display/JSTSG/(GP)ICannotStartorOpenaProcess)

  • SAP HANA Modeling Sample end to end Project

    Dear All,
    I was recently trained on SAP HANA modeling and BODS, and I am in the process of learning SAP BO. My core is SAP EP and I don't have in-depth knowledge of SAP BI or the functional aspects.
    I am working on a few sample applications, but I would like to do a sample end-to-end project on real-time scenarios/POCs.
    Do you have any sample requirement documents that would help with an end-to-end project (real-time scenarios)? Does SAP store any pilot project requirement specs for public use that would help to gain real-time experience?
    Kindly help me on this.
    Thanks & Regards,
    Rajeev Bikkani

    Hi, look into this:
    Role of a Functional Consultant in an End-to-End Implementation
    1. The functional consultant is expected to gather knowledge about the current business process, design current business flows, and study the current business processes and their complications; in short, getting thoroughly familiar with the current business setup. Flow diagrams and DFDs are prepared, most of the time in Visio format; all this forms part of the AS-IS document.
    2. Everything configured has to be documented per its category in the form of predefined templates, which then have to be approved by the team leads or whoever the consultant reports to.
    3. Mapping and gap analysis is done for each module. I have seen people defining integration after mapping, gap analysis and configuration are done, but in my implementation experience it is a simultaneous process.
    4. Before starting to configure the future business processes in SAP, the DFDs/ERDs are prepared; this documentation is called TO-BE, which can also be described as the result of the mapping and gap analysis.
    5. Sometimes functional consultants are also expected to prepare test scripts for testing the configured scenarios.
    6. End-user manuals and user training are also expected from functional consultants.
    The project normally starts off with a kick-off meeting in which the team size, team members, reporting system, responsibilities, duties, methodology, dates and schedules, and working hours, which have been pre-decided, are formally defined.

  • SAP HANA modelling Standalone

    Hello Experts,
    We are in the process of a HANA standalone implementation with Design Studio as the reporting tool. While modeling, I could not figure out answers to some of the questions below. Experts, please help.
    Best way of modeling: SAP HANA Live is built entirely on calculation views; there are no attribute or analytic views. I have received different answers as to why there are only calculation views and no analytic or attribute views. We are on the latest SP7 version. This is a brand new HANA on top of a non-SAP (DB2) source. What is the best way to model this scenario: can we model everything in calculation views like SAP HANA Live, or do you suggest using the standard attribute, analytic and calculation views for the data model? Is SAP moving away from AV & AT to calculation views only, to simplify the modeling approach?
    Reporting: We are using Design Studio as the front-end tool. Just for example, if we were using BW, we would bring all the data into BW from different sources, build the cubes and use BEx queries; in the BEx query we would use restricted key figures, calculated key figures, calculations, etc. We have the same reporting requirements here: calculations, RKF, CKF, sum, average, etc. If we are using Design Studio on top of standalone HANA, where do I need to implement all these calculations? Is it in different views? (From a reporting perspective, in a BW system I would have done all the calculations in BEx.)
    Universe: If we are doing all the calculations in SAP HANA, like RKF, CKF and other calculations, what is the point in having an additional universe layer, given that the reporting components can access the views directly? In one of our POCs we found that using a universe affected performance.
    Real-time reporting: Our overall objective is to meet real-time or close to real-time reporting requirements. How can Data Services help here? I could schedule the data loads every 3 or 5 minutes to pull the data from the source. If I am using Data Services, how soon can I get the data into HANA? I know it depends on the number of records, the transformations between the systems and the network speed. Assuming I schedule the job every 2 minutes and it takes another 5 minutes to process the Data Services job, is it fair to say that my information will be available in the BOBJ tools within 10 minutes of the creation of the records?
    Are there any new ETL capabilities included in SP7? I see some additional features included in SP7. Are the concepts discussed still valid, given that in SP7 we have the star join concept?
    Thanks
    Magge

    magge kris wrote:
    Best way of modeling: [...] Is SAP moving away from AV & AT to calculation views only, to simplify the modeling approach?
    >> I haven't read any "official" guidance to move away from the typical modeling approach, so I'd say stick with the usual approach: attribute views, then analytic views, then calculation views. I was told that the reason for the different approach with HANA Live was to simplify development for mass production of solutions.
    Reporting: [...] If we are using Design Studio on top of standalone HANA, where do I need to implement all these calculations? Is it in different views?
    >> I'm not a BW guy, but from a HANA perspective: implement them where they make the most sense. In some cases this is obvious - restricted columns are only available in analytic views. It is hard to provide more complex advice here - it depends on your scenario(s). Review your training materials and SCN posts and you should start to develop a better idea of where to model particular requirements. Most of the time, in typical BI scenarios, requirements map nicely to straightforward modeling approaches such as attribute/analytic/calculation views. However, some situations, such as slowly-changing dimensions or certain kinds of calculations (e.g. calculation before aggregation with BODS as the source, where the calculation should be done in the ETL logic), can be more complex. If you have specific scenarios that you're unsure about, post them here on SCN.
    Universe: [...] What is the point in having an additional universe layer, given that the reporting components can access the views directly?
    >> Depends on what you're doing. A universe generates SQL just like the front-end tools, so bad performance implies bad modeling. Generally speaking, universes *can* create a more autonomous reporting architecture. But if your scenario doesn't require it, then by all means avoid the additional layer if there's no added value.
    Real-time reporting / new ETL capabilities in SP7: [...]
    >> Not exactly sure what your question here is. Your limits with BODS are the same as with any other target system - they don't depend on HANA. The second the record(s) are committed to HANA they are available. They may be in delta storage, but they're available. You just need to work out how often to schedule BODS - and if your jobs take 5 minutes to run but you schedule executions every 2 minutes, you're going to run into problems...
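    One more point that may help with the universe question: with or without a universe in between, the front end ultimately fires SQL against the activated column views in _SYS_BIC, so calculations pushed down into the views are reused by every tool. A rough sketch of such a query (package, view and column names here are invented, not from your system):
    -- Query an activated view directly; the engine aggregates the measure
    -- and applies whatever analytic privileges the querying user has.
    SELECT "MATERIAL",
           SUM("NET_VALUE") AS "NET_VALUE"
      FROM "_SYS_BIC"."my.package/CV_SALES"
     GROUP BY "MATERIAL"
     ORDER BY "NET_VALUE" DESC;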

  • Incorrect results with design-time artifacts

    Hi,
    Problem description:
    After creating the design-time artifacts, an analytic view with an analytic privilege whose assign restriction uses a repository procedure does not return the desired results.
    HANA AWS revision 70.
    Problem Recreation:
    Below is the process to recreate the problem.
    1. EDW.hdbschema:
    schema_name="EDW";
    2. EDW_DD.hdbdd:
    namespace excent.P1;
    @Schema : 'EDW'
    context EDW_DD {
        type TT_LV {
            LowValue : String(18);
        };
        @Catalog.tableType: #COLUMN
        entity USER {
            key User       : String(10);
            key InfoObject : String(10);
            key Option     : String(2);
            key LowValue   : String(18);
            HighValue      : String(18);
            ChangedOn      : LocalDate;
            ChangedBy      : String(8);
        };
    };
    3. Created Analytic view (AN_EDW_SALES.analyticview) based on VBAP table with output as VBELN, MATNR, NETWR. Apply privileges is left empty.
    4. Created repository procedure (LowValue_IN.procedure)
    CREATE PROCEDURE LowValue_IN (OUT RES "EDW"."excent.P1::EDW_DD.TT_LV")
           LANGUAGE SQLSCRIPT
           SQL SECURITY DEFINER
           DEFAULT SCHEMA _SYS_BIC
           READS SQL DATA AS
    v_cnt INT;
    BEGIN
    RES = SELECT "LowValue" FROM "EDW"."excent.P1::EDW_DD.USER"
                 WHERE "User"       = SESSION_USER
                   AND "InfoObject" = 'MATNR'
                   AND "Option"     = 'EQ'
                 GROUP BY "LowValue", "User";
    SELECT COUNT(*) INTO v_cnt FROM :RES;
    IF :v_cnt = 0
    THEN RES = SELECT 'EMPTY' AS "LowValue" FROM DUMMY;
    END IF;
    END;
    5. Created an analytic privilege (AP_EDW_SALES.analyticprivilege) with an assign restriction on the repository procedure (LowValue_IN.procedure) with operator "IN", and a privilege validity with operator ">=" and From date of yesterday (2014-03-24).
    6. Created Role (EDW_SALES.hdbrole) and assigned the below privileges:
    system privilege: CATALOG READ;
    catalog sql object "SYS"."REPOSITORY_REST": EXECUTE;
    catalog schema "_SYS_BI": SELECT;
    catalog schema "_SYS_BIC": SELECT;
    analytic privilege: excent.P1:AP_EDW_SALES.analyticprivilege;
    package excent.P1: REPO.READ;
    7. Created a user (EDWUSER) from the Systems view and assigned the role (EDW_SALES.hdbrole).
    8. In the SQL console, executed the query to insert 1 record for a value that exists in VBAP:
    insert into "EDW"."excent.P1::EDW_DD.USER" values('EDWSALES','MATNR','EQ','M-08','','','');
    Now when I do the data preview, I can see all the records instead of only the 1 record with MATNR "M-08".
    A few observations:
    1. When the SELECT statement of the procedure is executed in the SQL console as EDWUSER (with the extra privilege SELECT on the EDW schema), it returns only 1 row, but the analytic view does not.
    2. The repository procedure can also be used as a catalog procedure in the analytic privilege; this behaves the same way.
    3. When I change the "Apply Privileges" option to "Analytic Privileges" in the analytic view, I get the below error when doing a data preview.
    I could not understand where I am going wrong. Is it a product bug or a mistake on my end?
    Regards
    Raj

    Ohh! Yes, got it now.
    Well, I tried to recreate your scenario (tested on Rev 70):
    1) I am sure you might have used the correct insert statement:
    insert into "EDW"."MarchBatch.tst::EDW_DD.USER" values('EDWUSER','MATNR','EQ','1400-310','','','');
    The user name must be EDWUSER, not EDWSALES.
    2) I created the analytic privilege with the validity date and the rest as mentioned by you. The only difference is that I am using a repository procedure created under the Content folder using the Modeler perspective (PROC_TEST), as shown in the screenshot below.
    3) The data preview for EDWUSER, shown below, gives the expected results, i.e. only MATNR = 1400-310 shows up.
    4) When I used the .procedure (LowValue_IN), I also hit the same error you mentioned below.
    Since the only difference is the way the procedure is created, the issue must be in the way .procedure is used. Maybe a bug, I guess.
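    If you want to double-check the user-name mismatch on your side, the two statements below can be run as EDWUSER in the SQL console; they only use plain SQL against the objects from your post, no repository artifacts involved:
    -- Confirm which name SESSION_USER resolves to for the logged-on user:
    SELECT SESSION_USER FROM "DUMMY";
    -- Check whether the restriction table actually has a row for that user;
    -- the insert in the original post used 'EDWSALES' while the connected
    -- user is EDWUSER, so this is expected to come back empty.
    SELECT * FROM "EDW"."excent.P1::EDW_DD.USER"
     WHERE "User" = SESSION_USER;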
    Regards,
    Krishna Tangudu
