COUNT DISTINCT in SAP HANA

I am not getting the results I am looking for when I try to use the COUNT DISTINCT function. I was able to write the Calc View using COUNT(DISTINCT <column_name1> || <column_name2>), but when I look at it in Explorer it does not give me the right results. On the other hand, the same data in MSAS 2000 and MSAS 2008 gives the right results, yet after porting the same query to a Calc View for SAP HANA the results do not tally. Any suggestions or experience to share? See the links below:
http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/401eb46e-d407-2f10-c999-e467d93eae50?QuickLink=index&overridelayout=true&53244709744350
Other Examples
– Exception Aggregation (e.g. Distinct Count only available as BWA Calculation Engine feature)
http://www.saptechno.com/sap-notes.html?view=sapnote&id=1631919
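One thing worth checking (a hedged sketch with placeholder schema, table and column names, not a confirmed cause): concatenating the two columns without a separator lets different value pairs collide ('AB' || 'C' equals 'A' || 'BC'), which lowers the distinct count, so a separator that cannot occur in the data is safer:

-- hedged sketch; my_schema, my_table and the column names are placeholders
SELECT COUNT(DISTINCT column_name1 || '#' || column_name2) AS distinct_combinations
FROM my_schema.my_table;

Whether this explains the mismatch against MSAS depends on the data; the links above also note that Distinct Count as exception aggregation is a Calculation Engine feature.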

I believe you are more likely to get a response in the In-Memory Business Data Management (SAP HANA and In-Memory Computing) forum.
I'd move your post there, but I don't have the rights for this.
Please remember to close this thread.
Thank you for your understanding,
- Ludek

Similar Messages

  • SAP HANA - How system understands the columnar data relation?

    Hi All,
    The SAP HANA database stores data in columnar format, so from a row-based table perspective each column is treated like an individual table with distinct values.
    How does the HANA database maintain the relationship between the entries of each row so that the user gets correct data? In other words, how does the HANA database internally process column-based records?
    Regards,
    Mandar

    Lars,  Can you please help me to understand how this inverse index works here?
    Mandar,
    Even the determination of the correct rows can be linked to these three compression techniques:
    Run-length encoding,
    Cluster encoding
    and Dictionary encoding.
    Run-length encoding. If data in a column is sorted there is a high probability that two or more
    elements contain the same values. Run-length encoding counts the number of consecutive column
    elements with the same values. To achieve this, the original column is replaced with a two-column
    list. The first column contains the values as they appear in the original table column and the second
    column contains the counts of consecutive occurrences of the respective value. From this information
    the original column can easily be reconstructed.
    Cluster encoding. This compression technique works by searching for multiple occurrences of the
    same sequence of values within the original column. The compressed column consists of a two-column
    list with the first column containing the elements of a particular sequence and the second
    column containing the row numbers where the sequence starts in the original column. Many popular
    data compression programs use this technique to compress text files.
    Dictionary encoding. Table columns which contain only a comparably small number of distinct values
    can be effectively compressed by enumerating the distinct values and storing only their numbers.
    This technique requires that an additional table, the dictionary, is maintained which in the first column
    contains the original values and in the second one the numbers representing the values. This
    technique leads to high compression rates, is very common, e.g. in country codes or customer numbers,
    but is seldom regarded as a compression technique.
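    To see which of these encodings HANA actually chose for the columns of a table, a minimal sketch using the monitoring view M_CS_COLUMNS (schema and table names are placeholders):
    -- COMPRESSION_TYPE reports the applied compression, e.g. RLE, CLUSTERED, or DEFAULT (plain dictionary encoding)
    SELECT COLUMN_NAME, COMPRESSION_TYPE
    FROM M_CS_COLUMNS
    WHERE SCHEMA_NAME = 'MY_SCHEMA'
      AND TABLE_NAME  = 'MY_TABLE';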
    thanks,
    Tilak

  • SAP HANA D3 Library errors - "queue" is not a function

    Hello all,
    I have a question regarding D3 integration into SAP HANA and hope someone can help me.
    Over the last few days I developed a D3 choropleth (spatial data) with hover effects, tooltips and a legend. I have an existing SAPUI5 and HANA XS application running, and now I want to integrate my D3 choropleth.
    1) First of all, I added the following tags to the head section of the index.html file of my SAPUI5 project, as I need those libraries.
    <script type="text/javascript" src="http://d3js.org/d3.v3.min.js"></script>
    <script type="text/javascript" src="http://d3js.org/queue.v1.min.js"></script>
    <script type="text/javascript" src="http://d3js.org/topojson.v1.min.js"></script>
    2) Then I added the code of my D3 choropleth to the view.js of my SAPUI5 project:
    var html2 = new sap.ui.core.HTML("d3choropleth", {
                    content: "<div class='D3Choropleth'>" + "</div>",
                    preferDOM: false,
                    afterRendering: function() {
                                  .... here is my code ...
    I have a SAPUI5 shell with different NavigationItems (tabs), and for the D3 choropleth tab I write:
    case "WI_choropleth":
    oShell.setContent(html2);
    break;
    When I start my SAPUI5 project and click on the tab where the D3 choropleth should be rendered, I get the following errors:
    d3.scale.threshold is not a function
    d3.geo.albers center is not a function
    queue is not a function
    Moreover, my D3 choropleth works very well standalone outside HANA, which is why I assume it is a library integration issue.
    In my browser's (Firefox) developer console I can see that a D3 library is loaded by default (as it is one of SAPUI5's components) with the path sap/ui5/1/resources/sap/ui/thirdparty/D3.js.
    BUT this is a really old version of D3 (2.9); the current release is D3 3.4. So maybe the problem is that the D3 library loaded by default overrides my integration of D3 (script tags above)?
    Does anyone have the same issue and know how to solve it? Furthermore, it seems that the d3 queue library is also not loaded, since the error "queue is not a function" occurs, and I would also like to know how to solve that error.
    We have SAP HANA Developer Edition Revision 80 (by AWS), HANA Studio and Client are on revision 73 (64bit).
    It would be great if anyone could help me with my issue.
    Further question: Is the current release of D3 going to be integrated into the next HANA revision?
    Thanks a lot & regards,
    Andreas

    Hi Andreas,
    You are right. The way you integrate the libraries in your code is not the way that will work for HANA XS projects. This is why it does not accept the queue function (the library is not loaded) and also why the 3.x-only D3JS functions are not accepted (only the internal D3JS library is loading, and these functions were not yet part of 2.9).
    To integrate third-party libraries, you need to add XSJSLIB files to your project. These need to pass the server-side JSLint checks before being accepted by the XS engine. (Client-side checks that fail might not count, though.)
    Please see this post from David Brookler for more information.
    http://scn.sap.com/community/developer-center/hana/blog/2013/12/23/db001-using-libraries-in-xs
    Best regards,
    Tobias

  • Open Hub (SAP BW) to SAP HANA through DB connection data loading: the "Delete data from table" option is not working. Can anyone from this forum help?

    Issue:
    I have a SAP BW system and a SAP HANA system.
    SAP BW connects to SAP HANA through a DB connection (named HANA).
    Whenever I create an Open Hub destination of type DB table with the help of the DB connection, the table is created at the HANA schema level ( L_F50800_D ).
    I executed the Open Hub service without checking the "Deleting data from table" option.
    Data was loaded with 16 records from BW to HANA, the same on both sides.
    The second time I executed it from BW to HANA, 32 records arrived (it appends).
    I then executed the Open Hub service with the "Deleting data from table" option checked.
    Now I am getting the short dump DBIF_RSQL_TABLE_KNOWN.
    If I check SAP BW system to SAP BW system, it works fine.
    Is this option supported through a DB connection or not?
    Please see the attachment along with this discussion and help me work out how to resolve it.
    From
    Santhosh Kumar

    Hi Ramanjaneyulu ,
    First of all thanks for the reply ,
    Here the issue is at OH level (definition level - DESTINATION tab and FIELD DEFINITION).
    There is a check box there which I have already selected; that is exactly my issue - even though it is selected,
    the deletion is not performed at the target level.
    SAP BW to SAP HANA via DB connection:
    1. First run from BW, say 16 records - DTP executed - loaded up to HANA - 16, the same.
    2. Second run executed from BW - now the HANA side is appended, i.e. 16 + 16 = 32.
    3. So I selected the check box at OH level, "Deleting data from table".
    4. Now, executing the DTP throws a short dump - DBIF_RSQL_TABLE_KNOWN.
    Now please tell me how to resolve this. Is this option, "deleting data from table", applicable for HANA at all?
    Thanks
    Santhosh Kumar

  • How to display the count distinct in a report

    hi,
    I have a report with multiple columns in it, including a column, say A; I need to display, in a calculated column B, how many distinct values there are in A across the entire report. How can I do that?

    Hi.
    For example:
    CALENDAR_YEAR
    CALENDAR_MONTH_DESC
    count(distinct TIMES.CALENDAR_MONTH_DESC by TIMES.CALENDAR_YEAR)
    This count gives you how many distinct months there are in each year.
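    If you need the same figure in plain SQL rather than OBIEE logical SQL (a hedged sketch; TIMES stands for whatever physical table backs the report), pre-aggregate the distinct count per year and join it back to the detail rows:
    SELECT t.CALENDAR_YEAR,
           t.CALENDAR_MONTH_DESC,
           y.DISTINCT_MONTHS
    FROM TIMES t
         INNER JOIN (SELECT CALENDAR_YEAR,
                            COUNT(DISTINCT CALENDAR_MONTH_DESC) AS DISTINCT_MONTHS
                     FROM TIMES
                     GROUP BY CALENDAR_YEAR) y
           ON t.CALENDAR_YEAR = y.CALENDAR_YEAR;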
    Regards
    Goran
    http://108obiee.blogspot.com

  • Currency conversion error in SAP HANA

    Hi,
    I am new to SAP HANA and learning to create information views in HANA studio (SAP HANA SP6 on Cloudshare, HANA studio 1.0.68). I am trying to create a simple analytic view (on purchaseOrderItem table in SAP_HANA_EPM_DEMO sample database) to have GrossAmount converted to EUR.
    I added a calculated column as follows:
    When I click "OK", I get the error:
    "The check box 'Calculate before aggregation' has been unchecked, because the definition of the calculated column contains measures with currency conversion, restricted measures or operands with input parameters. For such a calculated column the calculation is always done after the aggregation."
    The "Calculate before aggregation" check box then gets unchecked. See the screenshot below:
    Please suggest what the reason could be. Thanks in advance.
    Regards,
    Amit

    Hi Amit,
    If you uncheck the "Calculate before aggregation" check box and activate the view, you will see in the generated log that a calc scenario is created (a view with an /olap wrapper). Due to the calc scenario, aggregation is defined as the default behavior for the KFs, and hence the calculation cannot be done before aggregation.
    By the way, I did not understand why you need "Calculate before aggregation" for a KF which is just a copy of another KF. If you need the gross amount in local currency and EUR, then just perform the currency conversion without the "Calculate before aggregation" check box. It will work.
    Regards,
    Ravi

  • SAP HANA modelling Standalone

    Hello Experts,
    We are in the process of a HANA standalone implementation, with Design Studio as the reporting tool. While modeling, I could not figure out answers to some of the questions below. Experts, please help.
    Best way of modeling: SAP HANA Live is built entirely on calculation views; there are no attribute or analytic views. I have received different answers as to why there are only calculation views and no analytic or attribute views. We are on SP7, the latest version. This is a brand new HANA on top of a non-SAP source (DB2). What is the best way to model this scenario - can we model everything in calculation views like SAP HANA Live, or do you suggest using the standard attribute, analytic and calculation views for the data model? Is SAP moving away from AV & AT to calculation views only, to simplify the modeling approach?
    Reporting: We are using Design Studio as the front-end tool. Just as an example, if we were using BW, we would bring all the data into BW from different sources, build the cubes and use BEx queries. In the BEx query we would use restricted key figures, calculated key figures, calculations etc. We have the same requirements on the reporting side: calculations, RKF, CKF, sum, average etc. If we are using Design Studio on top of standalone HANA, where do I need to implement all these calculations? Is it in different views? (From a reporting perspective, in a BW system I would have done all the calculations in BEx.)
    Universe: If we are doing all the calculations in SAP HANA, like RKF, CKF and other calculations, what is the point in having an additional universe layer, since the reporting components can access the views directly? In one of our POCs we found that using a universe affects performance.
    Real-time reporting: Our overall objective is to meet real-time or near-real-time reporting requirements. How can Data Services help here? I can schedule the data loads every 3 or 5 minutes to pull the data from the source. If I am using Data Services, how soon can I get the data into HANA? I know it depends on the number of records, the transformations between the systems and the network speed. Assuming I schedule the job every 2 minutes and it takes another 5 minutes to process the Data Services job, is it fair to say that my information will be available in the BOBJ tools within 10 minutes of the creation of the records?
    Are there any new ETL capabilities included in SP7? I see some additional features in SP7. Are the concepts discussed above still valid, given that SP7 introduces the star join concept?
    Thanks
    Magge

    magge kris wrote:
    Hello Experts,
    We are in the process of a HANA standalone implementation, with Design Studio as the reporting tool. While modeling, I could not figure out answers to some of the questions below. Experts, please help.
    Best way of modeling: SAP HANA Live is built entirely on calculation views; there are no attribute or analytic views. I have received different answers as to why there are only calculation views and no analytic or attribute views. We are on SP7, the latest version. This is a brand new HANA on top of a non-SAP source (DB2). What is the best way to model this scenario - can we model everything in calculation views like SAP HANA Live, or do you suggest using the standard attribute, analytic and calculation views for the data model? Is SAP moving away from AV & AT to calculation views only, to simplify the modeling approach?
    >> I haven't read any "official" guidance to move away from the typical modeling approach, so I'd say stick with the usual approach - AT, then AV, then CA views. I was told that the reason for the different approach with HANA Live was to simplify development for mass production of solutions.
    Reporting: We are using Design Studio as the front-end tool. Just as an example, if we were using BW, we would bring all the data into BW from different sources, build the cubes and use BEx queries. In the BEx query we would use restricted key figures, calculated key figures, calculations etc. We have the same requirements on the reporting side: calculations, RKF, CKF, sum, average etc. If we are using Design Studio on top of standalone HANA, where do I need to implement all these calculations? Is it in different views? (From a reporting perspective, in a BW system I would have done all the calculations in BEx.)
    >> I'm not a BW guy, but from a HANA perspective - implement them where they make the most sense. In some cases this is obvious - restricted columns are only available in analytic views. It is hard to provide more specific advice here - it depends on your scenario(s). Review your training materials, review SCN posts, and you should start to develop a better idea of where to model particular requirements. (Most of the time in typical BI scenarios, requirements map nicely to straightforward modeling approaches such as attribute/analytic/calculation views. However, some situations, such as slowly changing dimensions or certain kinds of calculations (e.g. calculation before aggregation with BODS as the source, where the calculation should be done in the ETL logic), can be more complex.) If you have specific scenarios that you're unsure about, post them here on SCN.
    Universe: If we are doing all the calculations in SAP HANA, like RKF, CKF and other calculations, what is the point in having an additional universe layer, since the reporting components can access the views directly? In one of our POCs we found that using a universe affects performance.
    >>> Depends on what you're doing. Universe generates SQL just like front-end tools, so bad performance implies bad modeling. Generally speaking - universes *can* create more autonomous reporting architecture. But if your scenario doesn't require it - then by all means, avoid the additional layer if there's no added value.
    Real-time reporting: Our overall objective is to meet real-time or near-real-time reporting requirements. How can Data Services help here? I can schedule the data loads every 3 or 5 minutes to pull the data from the source. If I am using Data Services, how soon can I get the data into HANA? I know it depends on the number of records, the transformations between the systems and the network speed. Assuming I schedule the job every 2 minutes and it takes another 5 minutes to process the Data Services job, is it fair to say that my information will be available in the BOBJ tools within 10 minutes of the creation of the records?
    Are there any new ETL capabilities included in SP7? I see some additional features in SP7. Are the concepts discussed above still valid, given that SP7 introduces the star join concept?
    >>> Not exactly sure what your question here is. Your limits on BODS are the same as with any other target system - doesn't depend on HANA. The second the record(s) are committed to HANA, they are available. They may be in delta storage, but they're available. You just need to work out how often to schedule BODS - and if your jobs are taking 5 minutes to run, but you're scheduling executions every 2 minutes, you're going to run into problems...
    Thanks
    Magge

  • SAP HANA XSODATA service (Service exception: column store error.)

    Hi all,
    I have a problem with my calculation view when using an xsodata service on it. (There's an input parameter called P_SWERK.)
    In my calculation view, the data origins are two analytic views (on which the input parameter P_SWERK should filter data at the beginning of the SQLScript code).
    First I read the analytic views with the function CE_OLAP_VIEW, and then I apply a CE_PROJECTION function on them, using the input parameter P_SWERK as a filter on the field SWERK.
    But when I run my application in the browser, the following error occurs:
    <message xml:lang="en-US">Service exception: column store error.</message>
    The link is this :
    http://host:port/Project_DM/services/Test/TEST_ZIIG_PDM_CALC_VIEW_FINAL_service.xsodata/PianiDiManutenzioneParameters(P_SWERK='CO05')/Results
    The service definition is :
    service {
    "EricssonItalgas/TEST_ZIIG_PDM_VIEW_FINAL.calculationview" as "PianiDiManutenzione" keys generate local "ID"
    parameters via entity;
    }
    The SAP HANA AWS revision is 60.
    Could someone help me, please?
    Thanks in advance.
    Dario.

    Hi Dario,
    Does the calculation view work without the xsodata service? From the URL, your XS project name should be Project_DM, but from the xsodata source the project name is EricssonItalgas. I'm confused by this. Did you use rewrite_rules, or something else?
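    To check that, you can query the calculation view directly with the input parameter in plain SQL; a minimal sketch, assuming the view is activated under the EricssonItalgas package and using the CO05 value from your URL:
    SELECT *
    FROM "_SYS_BIC"."EricssonItalgas/TEST_ZIIG_PDM_VIEW_FINAL"
         ('PLACEHOLDER' = ('$$P_SWERK$$', 'CO05'));
    If this already fails, the problem is in the view itself rather than in the xsodata service.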
    Best regards,
    Wenjun

  • Universe based on SAP HANA OLAP connection

    Can I create a universe based on a SAP HANA OLAP connection?
    How can I access a SAP HANA view that was created using an OLAP connection in WebI and Dashboards? It does not allow creating a universe on top of a SAP HANA OLAP connection and gives an error.
    SAP BusinessObjects query and reporting applications can directly connect to OLAP SAP HANA connections. No universe is required, only a published OLAP SAP HANA connection.
    Thanks

    OLAP connections are only meant for tools that use OLAP connectivity, like BEx, and HANA analytic views and calculation views are also treated as OLAP cubes by them.
    So these connections are only available to analysis products like Analysis for Office (which is meant to be used with BEx or OLAP cubes) and Design Studio, which is again mainly built around the OLAP paradigm.
    You cannot create a universe or a WebI document on an OLAP connection to HANA as of now. However, it could be exposed in a future release of BO, which is not public yet.

  • Issues while creating new user in SAP HANA

    Hello Team
    When I try to create a new user in SAP HANA studio, I can see that a new field for data validity has been added, with two options: a) Valid From and b) Valid Until. No matter what dates I enter, I get this error while creating the user: Status: inactive;
    Reason: outside validity period. Please find the screenshot attached below. Please suggest what dates should be given in this field, with a sample example.
    Regards

    Prag,
    Try this. Execute the following in a SQL window started by a userid that has been granted the USER ADMIN system privilege:
    ALTER USER BODS1 VALID FROM NOW UNTIL FOREVER;
    You can use a date instead of FOREVER, for example '2016-12-31 23:59'.
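    If you prefer to create the user directly in SQL with an explicit validity window, a hedged sketch (user name, password and dates are placeholders):
    -- sketch only: BODS2 and the password are placeholders
    CREATE USER BODS2 PASSWORD Initial123a VALID FROM '2014-01-01 00:00' UNTIL '2016-12-31 23:59';
    -- or keep the account open-ended
    ALTER USER BODS2 VALID FROM NOW UNTIL FOREVER;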
    Good luck,
    Robert

  • Performance problem with more than one COUNT(DISTINCT ...) in a query

    Hi,
    (I hope this is the right forum.)
    In the following query, I have two COUNT DISTINCTs on two different fields of the same table. Execution time is okay (2 s) with one or the other COUNT(DISTINCT ...) in the SELECT clause, but is not tolerable (12 s) with both together in the query! I have
    a similar case with 3 counts: 4 s each, 36 s together!
    I've looked at the execution plan, and it seems that with two count distincts, SQL Server sorts the table twice before joining the results.
    I do not have much experience with SQL Server optimization, and I don't know what to improve or how. The SQL is generated by Business Objects, so I have few possibilities to tune it. The most direct way would be to execute two different queries, but I'd like
    to avoid that.
    Any advice?
    SELECT
      DIM_MOIS.DATE_DEBUT_MOIS,
      DIM_MOIS.NUM_ANNEE_MOIS,
      DIM_DEMANDE_SCD.CAT_DEMANDE,
      DIM_APPLICATION.LIB_APPLICATION,
      DIM_DEMANDE_SCD.CAT_DEMANDE ,
      count(distinct FAITS_DEMANDE.NB_DEMANDE_FLUX),
      count(distinct FAITS_DEMANDE.NB_DEMANDE_RESOL_NIV1)
    FROM
      ALIM_SID.DIM_MOIS INNER JOIN ALIM_SID.DIM_JOUR ON (DIM_JOUR.SEQ_MOIS=DIM_MOIS.SEQ_MOIS)
       INNER JOIN ALIM_SID.FAITS_DEMANDE ON (FAITS_DEMANDE.SEQ_JOUR=DIM_JOUR.SEQ_JOUR)
       INNER JOIN ALIM_SID.DIM_APPLICATION ON (FAITS_DEMANDE.SEQ_APPLICATION=DIM_APPLICATION.SEQ_APPLICATION)
       INNER JOIN ALIM_SID.DIM_DEMANDE_SCD ON (FAITS_DEMANDE.SEQ_DEMANDE_SCD=DIM_DEMANDE_SCD.SEQ_DEMANDE_SCD)
    WHERE
      DIM_MOIS.NUM_ANNEE_MOIS > 201301
    GROUP BY
      DIM_MOIS.DATE_DEBUT_MOIS,
      DIM_MOIS.NUM_ANNEE_MOIS,
      DIM_DEMANDE_SCD.CAT_DEMANDE,
      DIM_APPLICATION.LIB_APPLICATION
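    A commonly used workaround for this pattern (a hedged sketch, not something confirmed in this thread) is to compute each COUNT(DISTINCT ...) in its own pre-aggregated derived table and join the results on the GROUP BY keys, so each distinct column is sorted or hashed only once. Note that NULLs in the grouping columns would need extra handling with this join:
    SELECT
      d1.DATE_DEBUT_MOIS,
      d1.NUM_ANNEE_MOIS,
      d1.CAT_DEMANDE,
      d1.LIB_APPLICATION,
      d1.NB_FLUX,
      d2.NB_RESOL_NIV1
    FROM
      ( SELECT DIM_MOIS.DATE_DEBUT_MOIS, DIM_MOIS.NUM_ANNEE_MOIS,
               DIM_DEMANDE_SCD.CAT_DEMANDE, DIM_APPLICATION.LIB_APPLICATION,
               COUNT(DISTINCT FAITS_DEMANDE.NB_DEMANDE_FLUX) AS NB_FLUX
        FROM ALIM_SID.DIM_MOIS
             INNER JOIN ALIM_SID.DIM_JOUR ON DIM_JOUR.SEQ_MOIS = DIM_MOIS.SEQ_MOIS
             INNER JOIN ALIM_SID.FAITS_DEMANDE ON FAITS_DEMANDE.SEQ_JOUR = DIM_JOUR.SEQ_JOUR
             INNER JOIN ALIM_SID.DIM_APPLICATION ON FAITS_DEMANDE.SEQ_APPLICATION = DIM_APPLICATION.SEQ_APPLICATION
             INNER JOIN ALIM_SID.DIM_DEMANDE_SCD ON FAITS_DEMANDE.SEQ_DEMANDE_SCD = DIM_DEMANDE_SCD.SEQ_DEMANDE_SCD
        WHERE DIM_MOIS.NUM_ANNEE_MOIS > 201301
        GROUP BY DIM_MOIS.DATE_DEBUT_MOIS, DIM_MOIS.NUM_ANNEE_MOIS,
                 DIM_DEMANDE_SCD.CAT_DEMANDE, DIM_APPLICATION.LIB_APPLICATION ) d1
      INNER JOIN
      ( SELECT DIM_MOIS.DATE_DEBUT_MOIS, DIM_MOIS.NUM_ANNEE_MOIS,
               DIM_DEMANDE_SCD.CAT_DEMANDE, DIM_APPLICATION.LIB_APPLICATION,
               COUNT(DISTINCT FAITS_DEMANDE.NB_DEMANDE_RESOL_NIV1) AS NB_RESOL_NIV1
        FROM ALIM_SID.DIM_MOIS
             INNER JOIN ALIM_SID.DIM_JOUR ON DIM_JOUR.SEQ_MOIS = DIM_MOIS.SEQ_MOIS
             INNER JOIN ALIM_SID.FAITS_DEMANDE ON FAITS_DEMANDE.SEQ_JOUR = DIM_JOUR.SEQ_JOUR
             INNER JOIN ALIM_SID.DIM_APPLICATION ON FAITS_DEMANDE.SEQ_APPLICATION = DIM_APPLICATION.SEQ_APPLICATION
             INNER JOIN ALIM_SID.DIM_DEMANDE_SCD ON FAITS_DEMANDE.SEQ_DEMANDE_SCD = DIM_DEMANDE_SCD.SEQ_DEMANDE_SCD
        WHERE DIM_MOIS.NUM_ANNEE_MOIS > 201301
        GROUP BY DIM_MOIS.DATE_DEBUT_MOIS, DIM_MOIS.NUM_ANNEE_MOIS,
                 DIM_DEMANDE_SCD.CAT_DEMANDE, DIM_APPLICATION.LIB_APPLICATION ) d2
      ON  d1.DATE_DEBUT_MOIS  = d2.DATE_DEBUT_MOIS
      AND d1.NUM_ANNEE_MOIS   = d2.NUM_ANNEE_MOIS
      AND d1.CAT_DEMANDE      = d2.CAT_DEMANDE
      AND d1.LIB_APPLICATION  = d2.LIB_APPLICATION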

    Here is the script, nothing original. Hope this helps.
    -- Fact table :
    -- foreign keys begin by FK_,
    -- measures to counted (COUNT DISTINCT) begin with NB_
    CREATE TABLE [ALIM_SID].[FAITS_DEMANDE](
        [SEQ_JOUR] [int] NOT NULL,
        [SEQ_DEMANDE] [int] NOT NULL,
        [SEQ_DEMANDE_SCD] [int] NOT NULL,
        [SEQ_APPLICATION] [int] NOT NULL,
        [SEQ_INTERVENANT] [int] NOT NULL,
        [SEQ_SERVICE_RESPONSABLE] [int] NOT NULL,
        [NB_DEMANDE_FLUX] [int] NULL,
        [NB_DEMANDE_STOCK] [int] NULL,
        [NB_DEMANDE_RESOLUE] [int] NULL,
        [NB_DEMANDE_LIVREE] [int] NULL,
        [NB_DEMANDE_MEP] [int] NULL,
        [NB_DEMANDE_RESOL_NIV1] [int] NULL,
     CONSTRAINT [PK_FAITS_DEMANDE] PRIMARY KEY CLUSTERED (
        [SEQ_JOUR] ASC,
        [SEQ_DEMANDE] ASC,
        [SEQ_DEMANDE_SCD] ASC,
        [SEQ_APPLICATION] ASC,
        [SEQ_INTERVENANT] ASC,
        [SEQ_SERVICE_RESPONSABLE] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
     CONSTRAINT [AK_AK_FAITS_DEMANDE_FAITS_DE] UNIQUE NONCLUSTERED (
        [SEQ_JOUR] ASC,
        [SEQ_DEMANDE] ASC,
        [SEQ_DEMANDE_SCD] ASC,
        [SEQ_APPLICATION] ASC,
        [SEQ_INTERVENANT] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE]  WITH CHECK ADD  CONSTRAINT [FK_FAITS_DEMANDE_DIM_APPLICATION] FOREIGN KEY([SEQ_APPLICATION])
    REFERENCES [ALIM_SID].[DIM_APPLICATION] ([SEQ_APPLICATION])
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_APPLICATION]
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE]  WITH CHECK ADD  CONSTRAINT [FK_FAITS_DEMANDE_DIM_DEMANDE] FOREIGN KEY([SEQ_DEMANDE])
    REFERENCES [ALIM_SID].[DIM_DEMANDE] ([SEQ_DEMANDE])
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_DEMANDE]
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE]  WITH CHECK ADD  CONSTRAINT [FK_FAITS_DEMANDE_DIM_DEMANDE_SCD] FOREIGN KEY([SEQ_DEMANDE_SCD])
    REFERENCES [ALIM_SID].[DIM_DEMANDE_SCD] ([SEQ_DEMANDE_SCD])
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_DEMANDE_SCD]
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE]  WITH CHECK ADD  CONSTRAINT [FK_FAITS_DEMANDE_DIM_INTERVENANT] FOREIGN KEY([SEQ_INTERVENANT])
    REFERENCES [ALIM_SID].[DIM_INTERVENANT] ([SEQ_INTERVENANT])
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_INTERVENANT]
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE]  WITH CHECK ADD  CONSTRAINT [FK_FAITS_DEMANDE_DIM_JOUR] FOREIGN KEY([SEQ_JOUR])
    REFERENCES [ALIM_SID].[DIM_JOUR] ([SEQ_JOUR])
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_JOUR]
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE]  WITH CHECK ADD  CONSTRAINT [FK_FAITS_DEMANDE_DIM_SERVICE_RESPONSABLE] FOREIGN KEY([SEQ_SERVICE_RESPONSABLE])
    REFERENCES [ALIM_SID].[DIM_SERVICE] ([SEQ_SERVICE])
    GO
    ALTER TABLE [ALIM_SID].[FAITS_DEMANDE] CHECK CONSTRAINT [FK_FAITS_DEMANDE_DIM_SERVICE_RESPONSABLE]
    GO
    -- not shown : extended properties
    -- One of the dimension  tables (they all have a primary key named SEQ_)
    CREATE TABLE [ALIM_SID].[DIM_JOUR](
        [SEQ_JOUR] [int] IDENTITY(1,1) NOT NULL,
        [SEQ_ANNEE] [int] NOT NULL,
        [SEQ_MOIS] [int] NOT NULL,
        [DATE_JOUR] [date] NULL,
        [CODE_ANNEE] [varchar](25) NULL,
        [CODE_MOIS] [varchar](25) NULL,
        [CODE_SEMAINE_ISO] [varchar](25) NULL,
        [CODE_JOUR_ANNEE] [varchar](25) NULL,
        [CODE_ANNEE_JOUR] [varchar](25) NULL,
        [LIB_JOUR] [varchar](25) NULL,
        [LIB_JOUR_COURT] [varchar](25) NULL,
        [JOUR_OUVRE] [tinyint] NULL,
        [JOUR_CHOME] [tinyint] NULL,
     CONSTRAINT [PK_DIM_JOUR] PRIMARY KEY CLUSTERED (
        [SEQ_JOUR] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [ALIM_SID].[DIM_JOUR]  WITH CHECK ADD  CONSTRAINT [FK_DIM_JOUR_DIM_ANNEE] FOREIGN KEY([SEQ_ANNEE])
    REFERENCES [ALIM_SID].[DIM_ANNEE] ([SEQ_ANNEE])
    GO
    ALTER TABLE [ALIM_SID].[DIM_JOUR] CHECK CONSTRAINT [FK_DIM_JOUR_DIM_ANNEE]
    GO
    ALTER TABLE [ALIM_SID].[DIM_JOUR]  WITH CHECK ADD  CONSTRAINT [FK_DIM_JOUR_DIM_MOIS] FOREIGN KEY([SEQ_MOIS])
    REFERENCES [ALIM_SID].[DIM_MOIS] ([SEQ_MOIS])
    GO
    ALTER TABLE [ALIM_SID].[DIM_JOUR] CHECK CONSTRAINT [FK_DIM_JOUR_DIM_MOIS]
    GO

  • How do I get access to Mobile documents admin console login with SAP HANA Cloud cockpit?

    Hi All,
    I am trying to launch SAP Mobile Documents from my trial version of SAP HANA Cloud.
    The help link I followed:
    https://help.hana.ondemand.com/help/frameset.htm?dc618538d97610148155d97dcd123c24.html#concept_0B49F10346C94249845EC16364FFF66D_76
    In the SAP HANA Cloud Cockpit, using the link available under Authorization > Authorization Management,
    I created a user assigned to a group, and the group has been assigned to the role ODP-OPERATOR.
    As a next step, when I try to assign a token, I am not able to find one. How do I get a token from here?
    My intention in creating the user, group, role and token is to get access to the Mobile Documents admin console.
    However I have also tried to access the mobile documents admin console via link
    https://smd-p1886950994trial.hana.ondemand.com/mcm/admin
    When I browse the above link, it shows HTTP 503: the requested service is not currently available.
    I am not sure how to get access to the Mobile Documents admin page from here.
    Could someone clarify on this please?
    Regards,
    Saravanan.R

    Hi,
    I have managed to find an answer via this link, SAP Mobile Documents on SAP HANA Cloud, which says that the SAP Mobile Documents trial is not available in the SAP HANA Cloud cockpit.
    Regards,
    Saravanan.R

  • What are the different types of analytic techniques possible in SAP HANA with the examples?

    Hello Gurus,
    Please provide information on the different types of analytic techniques possible in SAP HANA, with examples.
    I would like to know, in the categories of predictive analysis, advanced statistical analysis, segmentation analysis, data reduction techniques and forecast techniques,
    which analytic techniques are possible in SAP HANA?
    Thanks and Regards
    Sushma C Narasimhamurthy

    Hi Sushma,
    You can download the user guide here:
    http://www.google.com.au/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&ved=0CFcQFjAB&url=http%3A%2F%2Fhelp.sap.com%2Fbusinessobject%2Fproduct_guides%2FSBOpa10%2Fen%2Fpa_user_en.pdf&ei=NMgHUOOtIcSziQfqupyeBA&usg=AFQjCNG10eovyZvNOJneT-l6J7fk0KMQ1Q&sig2=l56CSxtyr_heE1WlhfTdZQ
    It has a list of the algorithms, which are pretty disappointing, I must say. No Random Forests? No ensembling methods? Given that it's using R algorithms, I must say this is a missed opportunity to beat products like SPSS and SAS at their own game. If SAP were to include this functionality, they would be the only BI vendor capable of having a serious predictive tool integrated with the rest of the platform.... but this looks pretty weak.
    I can only hope a later release will remedy this - or maybe the SDK will allow me to create what I need.
    As things stand, I could build a random forest using this tool, but I would have to use a lot of hardcoded SQL to make it happen. And if I wanted to go down that road, I could use the algorithms that come with the Microsoft/Oracle software.
    Please let me be wrong........

  • How to change the unload priority of a table in SAP HANA?

    Hi Experts,
    How can we change the unload priority of a table in SAP HANA? I know that by default the priority is 5. Is there any way to check the unload priority of a particular table in HANA studio? Is there any SQL statement to get the same?
    Please suggest.
    Thanks in advance.
    Regards,
    Arindam

    Hello Arindam,
    Just for the future:
    ALTER TABLE - SAP HANA SQL and System Views Reference - SAP Library
    To check beforehand:
    select table_name, unload_priority
    from SYS.TABLES
    where table_name = '<Your Table>';
    To make the change:
    alter table <Your Table> unload priority <Priority You Want>;
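    A concrete usage sketch with a hypothetical table name (unload priority ranges from 0, never unloaded automatically, to 9, unloaded earliest):
    select table_name, unload_priority from SYS.TABLES where table_name = 'MY_FACT_TABLE';
    alter table MY_FACT_TABLE unload priority 2;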
    As you have asked in the BW on HANA section I assume you're on BW and you could also have checked this with tx SE14.
    Hopefully the above gives you everything you need.
    Kind Regards,
    Amerjit

  • Process to Upgrade SAP HANA from SPS06 to SPS07 on distributed systems

    Hi Experts,
    We have a requirement to upgrade our SAP HANA system from SPS06 to SPS07, but I have the queries below on this:
    1) Do we need to consider any notes for pre and post steps before applying the upgrade? (As far as I know, we have to consider the post-upgrade note 1962472; are there any others?)
    2) As far as I know, we just need to stop the SAP application systems and then take a backup of the HANA system.
    3) Also, we have a distributed environment in our landscape (three hosts). What would the upgrade approach be? Is it the same as for a single-host environment (as /hana/shared is common to all distributed hosts)?
    Could anyone clarify my questions, please?
    Also, for your information, we have BW systems on top of the HANA DB.
    Thanks very much in advance !
    Kind Regards,
    Arun Reddy
    Message was edited by: Tom Flanagan

    Hi John,
    Thank you for the response. Our source and target levels are: Source - 1.00.69.00.385196, Target - 1.00.74.03.392810.
    I just wanted to ask you one more quick question here: can't we upgrade between the above-mentioned versions in one go, or do we need to upgrade in two steps, i.e. first from 1.00.69.00.385196 to 1.00.73.00.389160 and then to the target level 1.00.74.03.392810?
    Also, as you said, in a distributed environment we can upgrade the same way as in a single-host environment, so is it the same as the approach below?
    1) Stop the HANA database.
    2) Navigate to the media path and then execute hdblcm in the GUI or on the command line.
    3) Select the option to upgrade the existing system.
    4) Provide the passwords and the inputs the system prompts for.
    5) Select the target levels of the HANA components.
    6) Once the upgrade is done successfully, upgrade the HANA client on all application servers.
    If my assumption above is correct, then I have a couple of questions:
    1) On which host do we need to log in and perform the upgrade? (We have three hosts - HANADB1, HANADB2, HANADB3.) Is it okay to do it from any of the three hosts?
    2) Also, if the upgrade stops in the middle due to errors, is there an option to resume the upgrade from the point where it stopped, or do we need to restore the last successful backup and then perform the upgrade again from the starting point?
    Sorry for asking so many questions; I have done the upgrade in a single-host environment, but not in a distributed one, so I wanted to make sure of all the points before carrying out the activity.
    Thanks for your help and patience in advance !
    Kind Regards,
    Arun Reddy
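    To confirm the source and target revisions before and after the run, a minimal sketch using the M_DATABASE system view:
    -- VERSION shows the full revision string, e.g. 1.00.69.00.385196
    SELECT SYSTEM_ID, DATABASE_NAME, VERSION FROM M_DATABASE;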

  • What is a dynamic join in SAP HANA?

    What is a dynamic join in SAP HANA and how does it work? Please explain with an example how to use it.
    Message was edited by: Tom Flanagan

    Hi Sree,
    In very simple and basic terms:
    If you have tables A and B with columns C1, C2 and C3 used in a multi-column join (with "Dynamic join" set to true) from both tables, then depending upon which columns you select in the query, ONLY those columns will be used in the join.
    For example, if you select C1 and C2 in the select statement, then the join will happen only on C1 and C2; C3 will not be used in the join criteria, even if the join definition involves all 3 columns.
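    A small usage sketch, assuming a hypothetical calculation view demo/CV_DYNJOIN built on A and B with a dynamic join over C1, C2 and C3 and a measure AMOUNT:
    -- only C1 and C2 are requested, so the dynamic join is evaluated on C1 and C2 only
    SELECT C1, C2, SUM(AMOUNT) AS AMOUNT FROM "_SYS_BIC"."demo/CV_DYNJOIN" GROUP BY C1, C2;
    -- all three columns are requested, so the join uses C1, C2 and C3
    SELECT C1, C2, C3, SUM(AMOUNT) AS AMOUNT FROM "_SYS_BIC"."demo/CV_DYNJOIN" GROUP BY C1, C2, C3;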
    Regards,
    Ravi
