Aggregating Group Values Outside of group

Hi, is this possible?
Basically, I'm trying to show aggregates for a column, by group, on rows that are outside the scope of the group. I'm able to add the total for each group at the end of the group, but I want the totals for each group at the end of the report instead.
thanks!
Leroy G. Brown

Hi g2beastie,
According to your description, the report is taking a long time to render, right?
In Reporting Services, the total time to generate a report includes TimeDataRetrieval, TimeProcessing and TimeRendering. To analyze which phase takes the most time, we can check the ExecutionLog3 view in the ReportServer database. For more information, please refer to this article:
More tips to improve performance of SSRS reports.
After checking which phase costs the most time, you can refer to this article to optimize your report:
Troubleshooting Reports: Report Performance.
If you have any questions, please feel free to ask.
Best regards,
Qiuyun Yu
TechNet Community Support

Similar Messages

  • Mac Pro - Link Aggregation Group

    Hi community, 
    does anyone know in what mode the virtual interface feature LAG (Link Aggregation Group) on a Mac Pro operates?
    The LACP modes are: On / Active / Passive / Off 
    And what driver mode is used?
    The driver modes are: Round-robin / Active-backup / XOR (balance-xor) / Broadcast / Adaptive transmit load balancing (balance-tlb) / Adaptive load balancing (balance-alb)
    There is nothing to adjust.
    Kind regards,
    Nils

    Link Aggregation only works when the device at the other end of the Links understands the commands of Link Aggregation Protocol. These tend to be Industrial-Strength Switches and Routers, not the affordable ones most Homeowners buy.
    The constraints of the other device, listed in its manual, will determine the best choices.
    Mac OS X 10.6 Server Admin: Setting Up Link Aggregation in Mac OS X Server
    OS X Mountain Lion: Combine Ethernet ports

  • How to access a Matrix cell value outside the matrix in textbox?

    Hi
    I have created a matrix in SSRS. Columns are grouped by Channel Type. I want to access the indicated cell's value outside the matrix.
    After running the report, the report shows:
    How can I access cell values outside the matrix? Please help.

    Hi Aladin92,
    According to your description, there is a matrix in the report, you want to reference the first value of total outside the matrix, right?
    In fact, report item expressions can only refer to other report items within the same grouping scope or a containing grouping scope. Are both text boxes in the same group? To work around the issue, please refer to the following steps:
      1. Click and select the matrix, copy it, and paste it into the report above the original matrix.
      2. Right-click the handle of the first row in the upper matrix, click Insert Rows, then click Inside Group - Above.
      3. Right-click the second cell of the first row, then click Expression.
      4. In the Expression text box, type an expression like the one below:
    ="A total of " & ReportItems!Textbox5.Value & " out of 258 BC CCRs have been evaluated"
      5. Set the text boxes and rows to hidden, except for the text box with the expression.
    The following screenshots are for your reference:
    If you have any more questions, please feel free to ask.
    Thanks,
    Wendy Fu
    TechNet Community Support

  • Exception aggregation - Last Value

    Hi,
    We have a requirement where a table in a non-R/3 system maintains the SKU + Customer level closing stocks for particular days. We want to create a cube for the table in BI. This table contains entries for SKU + Customer for different days. We want to show the closing stock of the last (latest) day for which a Customer + SKU record exists. To achieve this we have used a Calculated Key Figure on Closing Stock with Exception Aggregation - Last Value with reference to Calendar Day.
    It works fine only if I have both Customer and SKU used in the query. But we have to show these stocks aggregated at Plant level. (Customers are linked to Plants and this is directly available in our cube.) In this case the aggregation doesn't work.
    Let's say the following are the closing stocks for SKU M1 and 3 customers from Plant 1101:
              date -> 1/1/2007  2/1/2007  3/1/2007
    Cust1             10                  15
    Cust2                       20
    Cust3             25
    In the above case, if I have to see the closing stock on 3/1/2007 for SKU M1 at plant 1101 (the sum of the latest closing stock at all customer sites under plant 1101), I should get 60 (15 + 20 + 25). But it gives me 15 only.
    Could anyone explain how to design a query for this? Or in cube design/uploads it can be managed.
    Regards,
    Vikram.

    Hello Vikram
    One possible solution I can think of is to use an extra ODS in order to hold your data in an aggregated form that suits you. Unfortunately the exception aggregation behaviour does not work outside reporting, so in the transfer rules from your data provider to this new ODS you should add a formula checking whether the new record to be transferred to the ODS has a date newer than the ones already transferred into the ODS. Otherwise it should skip the current record and proceed to the next one.
    Another possible solution would be to use the APD and transfer the results of the query you are already using into an ODS (this way you take advantage of the exception aggregation behaviour), and by transferring your records to a second ODS you can achieve the aggregation you need.
    Assign points if any of the above helped

  • 'No authorization' error for selection values outside the authorized range

    Hi All,
    We are currently trying to use the authorization analysis concept for 'Cost center reporting'.
    We have made the 0COSTCENTER info-object authorization relevant and have created an analysis authorization object for it through RSECADMIN, maintaining the single value '1875'. We have assigned this object to one of the test users.
    So now if the user runs the report for cost center '1875', he is able to view the data/report. If he enters any other cost center apart from '1875', he gets an authorization error (everything works as per the requirement up to this point).
    But if the user enters multiple cost centers like 1875, 1876, 1877 as multiple single values and runs the report, he gets a 'No authorization' error.
    So, experts, please let me know if it's possible in any way for the user to see the result/report for the value he is authorized for (in this case 1875), with an information/warning/error message saying that he is not authorized for the other cost centers (in this case 1876, 1877).
    The same thing occurs if the user enters a range. Suppose a user is authorized for cost centers 1875 to 1880. If he enters multiple single values or a range within the authorized range, he can see the result, but if he enters even one single value outside the range he gets an error. What I mean is: if the user enters a range from 1875 to 1881, he does not get any data displayed but instead receives a 'No authorization' error message, even though he is authorized for all the cost centers in that range except 1881.
    I would really appreciate your help regarding this. Any comments/suggestions are very welcome.
    Thanks & regards,
    Sunny

    Hi Sunny
    That is the way analysis authorizations work!
    If you ask for a number of values, i.e. cost centers, and you don't have authorization for *all* of them, you will get a system error, as you say.
    There is no way of partially evaluating the query as you suggest (only for the authorized values).
    Try to be less restrictive when defining characteristic values in RSECADMIN.
    In queries, use variables with Processing by Authorization and Input Ready, so the system will tell the user which values are allowed. In your example the system would suggest the range 1875-1880.
    Hope this helps, regards
    Germán

  • Aggregation On Value Based Hierarchy

    Hi
    I am having a problem with aggregation on a value-based hierarchy.
    I have a table which serves as both my fact and dimension table.
    It is as follows:
    ID   Name   MID   Salary
    0    All
    1    A      0     10000
    2    B      1      9000
    3    C      1      9000
    4    D      1      9000
    5    E      2      8000
    6    F      2      8000
    I created a value-based dimension named EMPLOYEE, with child ID and parent MID.
    I created a cube EMP_SALARY with a measure Salary mapped to the Salary column in Employee.
    My expectation is to see the total salary at every level, including the salary at that level.
    So let us take employee B as an example. He is the manager of employees E and F.
    So what I would like to see at level B is sal of B + sal of E + sal of F = 9000 + 8000 + 8000 = 25000.
    But what I get from the cube is 9000. Now, is the above possible? If so, please provide me with suggestions.
    I can achieve the same result by using the following SQL query:
    select e1.id, rpad('*', 2*level, '*') || e1.name, e1.sal,
           (select sum(e2.sal)
            from test_emp e2
            start with e2.id = e1.id
            connect by prior e2.id = e2.mid
           ) sum_sal
    from test_emp e1
    start with e1.mid is null
    connect by prior e1.id = e1.mid;
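    The roll-up the question expects can also be sketched outside the cube. This is a hypothetical Python illustration of the same parent/child rows, where each node's aggregate is its own salary plus that of all its descendants:

```python
from collections import defaultdict

# Parent/child rows from the table above: node -> salary, node -> manager (MID)
salary = {1: 10000, 2: 9000, 3: 9000, 4: 9000, 5: 8000, 6: 8000}
parent = {1: 0, 2: 1, 3: 1, 4: 1, 5: 2, 6: 2}

children = defaultdict(list)
for node, mid in parent.items():
    children[mid].append(node)

def total(node):
    # A node's total is its own salary plus the totals of its subtrees
    return salary.get(node, 0) + sum(total(c) for c in children[node])

# total(2) -> 9000 + 8000 + 8000 = 25000, the value expected for employee B
```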

    The same basic problem, along with a solution, was discussed in the following thread.
    Re: Value Based Dimension causing Aggregation problems

  • I'm creating an array using a for loop, how do I get the values outside the loop while it is running? thanks

    I'm creating a set of values using a for loop and a shift register inside the loop. I want access to these values outside the loop as they are being created inside. When I connect a wire from the shift register to a display outside, it doesn't work. How do I do this? Thank you.
    Attachments:
    tamko_new.vi ‏29 KB

    I tried creating a local variable wired to the numeric indicator inside the loop. If I try to connect this to the analog output outside the loop, the loop just starts blinking and nothing happens. Am I doing something wrong? Thanks again.
    Attachments:
    tamko_new2.vi ‏29 KB

  • Can I set camera registers in version 1.1 of the IMAQ 1394 driver to values outside of the DCAM 1.3 specs?

    I have an imaging system that includes a Pt Grey Dragonfly camera. I know that using the manufacturer's SDK, I can set different camera parameters outside the DCAM specs by setting camera registers. If I am using version 1.1 of the IMAQ 1394 driver, can I set, say, the frame rate to a value outside the DCAM specs?

    http://exchange.ni.com/servlet/ProcessRequest?RHIVEID=101&RNAME=ViewQuestion&HOID=5065000000080000009B580000&ECategory=Vision

  • Aggregated groups on a timeline

    Hi
    I need to sub-aggregate and group data in a time sequence.
    Example data is as follows, but with many more records, and with fractions of date values.
    TSZ_ENTRY________STORAGEID______MTRLID____CAMPAIGNID____WEIGHT_CORR
    23-MAY-11 12.00.00.000000000 AM EUROPE/BERLIN     1     0     20     61.2397
    24-MAY-11 12.00.00.000000000 AM EUROPE/BERLIN     1     0     20     61.3617
    25-MAY-11 12.00.00.000000000 AM EUROPE/BERLIN     1     0     20     61.4837
    26-MAY-11 12.00.00.000000000 AM EUROPE/BERLIN     4     0     20     61.6058
    27-MAY-11 12.00.00.000000000 AM EUROPE/BERLIN     1     0     20     61.7277
    28-MAY-11 12.00.00.000000000 AM EUROPE/BERLIN     1     0     20     61.8497
    29-MAY-11 12.00.00.000000000 AM EUROPE/BERLIN     1     4     20     49.8715
    30-MAY-11 12.00.00.000000000 AM EUROPE/BERLIN     1     0     20     62.2156
    31-MAY-11 12.00.00.000000000 AM EUROPE/BERLIN     2     0     20     62.3377
    01-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     2     4     20     62.4596
    02-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     2     4     20     62.5816
    03-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     2     4     20     62.7036
    04-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     3     4     20     62.8256
    05-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     4     20     62.9475
    06-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     0     20     63.0696
    07-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     0     20     63.1915
    08-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     0     20     63.3135
    09-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     6     20     63.4356
    10-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     6     20     63.5575
    11-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     3     6     20     63.6795
    12-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     6     21     63.8015
    13-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     6     21     63.9235
    14-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     6     21     64.0454
    15-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     3     0     21     64.1676
    16-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     0     21     64.2895
    17-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     2     2     21     64.4116
    18-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     2     21     64.5335
    19-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     2     21     64.6555
    20-JUN-11 12.00.00.000000000 AM EUROPE/BERLIN     1     2     21     64.7775
    This data should be presented like:
    STORAGEID  MTRLID  CAMPAIGNID  SUM(WEIGHT_CORR)  COUNT(*)
    1          0       20          184.0851          3
    4          0       20           61.6058          1
    1          0       20          123.5774          2
    1          4       20           49.8715          1
    1          0       20           62.2156          1
    2          0       20           62.3377          1
    2          4       20          187.7448          3
    It is like doing a new sub-aggregation for any change in the first 3 columns:
    select storageid, mtrlid, campaignid, sum(weight_corr), count(*)
    from table....
    group by storageid, mtrlid, campaignid
    but not aggregating the complete table in one piece.
    .. and finally, the overall row sequence is ordered by TSZ_ENTRY.
    TSZ_ENTRY is of type TIMESTAMP WITH TIME ZONE. All other columns are integers. The database is 11.2.
    I have tried the OVER clause, but my skills are not deep enough to do this correctly.
    I'm sorry that all the blanks in the example get truncated.
    Any suggestions?
    BR
    /Per
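    The grouping described above (restarting the aggregate at every change in the first three columns, in TSZ_ENTRY order) can be sketched in Python with itertools.groupby, which only merges consecutive equal keys. This sketch uses the first rows of the sample data above:

```python
from itertools import groupby

# (storageid, mtrlid, campaignid, weight_corr), already ordered by TSZ_ENTRY
rows = [
    (1, 0, 20, 61.2397), (1, 0, 20, 61.3617), (1, 0, 20, 61.4837),
    (4, 0, 20, 61.6058),
    (1, 0, 20, 61.7277), (1, 0, 20, 61.8497),
    (1, 4, 20, 49.8715),
    (1, 0, 20, 62.2156),
]

# groupby merges only *consecutive* rows with an equal key, so each change
# in (storageid, mtrlid, campaignid) starts a new sub-aggregate
result = []
for key, grp in groupby(rows, key=lambda r: r[:3]):
    grp = list(grp)
    result.append(key + (round(sum(r[3] for r in grp), 4), len(grp)))

# result[0] -> (1, 0, 20, 184.0851, 3), matching the first expected row
```

    The SQL reply achieves the same consecutive grouping with the difference of two row_number() calls.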

    perforsgren wrote:
    Do you believe that this is possible by using the same tabibitosan method?

    As I said earlier, yes, it's eminently possible, by the addition of the "partition by storageid" clause in the first row_number().
    eg:
    with sample_data as (select to_timestamp('23/05/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 0 mtrlid, 20 campaignid, 61.2397 weight_corr from dual union all
                         select to_timestamp('24/05/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 0 mtrlid, 20 campaignid, 61.3617 weight_corr from dual union all
                         select to_timestamp('25/05/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 0 mtrlid, 20 campaignid, 61.4837 weight_corr from dual union all
                         select to_timestamp('26/05/2011', 'dd/mm/yyyy') tsz_entry, 4 storageid, 0 mtrlid, 20 campaignid, 61.6058 weight_corr from dual union all
                         select to_timestamp('27/05/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 0 mtrlid, 20 campaignid, 61.7277 weight_corr from dual union all
                         select to_timestamp('28/05/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 0 mtrlid, 20 campaignid, 61.8497 weight_corr from dual union all
                         select to_timestamp('29/05/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 4 mtrlid, 20 campaignid, 49.8715 weight_corr from dual union all
                         select to_timestamp('30/05/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 0 mtrlid, 20 campaignid, 62.2156 weight_corr from dual union all
                         select to_timestamp('31/05/2011', 'dd/mm/yyyy') tsz_entry, 2 storageid, 0 mtrlid, 20 campaignid, 62.3377 weight_corr from dual union all
                         select to_timestamp('01/06/2011', 'dd/mm/yyyy') tsz_entry, 2 storageid, 4 mtrlid, 20 campaignid, 62.4596 weight_corr from dual union all
                         select to_timestamp('02/06/2011', 'dd/mm/yyyy') tsz_entry, 2 storageid, 4 mtrlid, 20 campaignid, 62.5816 weight_corr from dual union all
                         select to_timestamp('03/06/2011', 'dd/mm/yyyy') tsz_entry, 2 storageid, 4 mtrlid, 20 campaignid, 62.7036 weight_corr from dual union all
                         select to_timestamp('04/06/2011', 'dd/mm/yyyy') tsz_entry, 3 storageid, 4 mtrlid, 20 campaignid, 62.8256 weight_corr from dual union all
                         select to_timestamp('05/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 4 mtrlid, 20 campaignid, 62.9475 weight_corr from dual union all
                         select to_timestamp('06/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 0 mtrlid, 20 campaignid, 63.0696 weight_corr from dual union all
                         select to_timestamp('07/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 0 mtrlid, 20 campaignid, 63.1915 weight_corr from dual union all
                         select to_timestamp('08/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 0 mtrlid, 20 campaignid, 63.3135 weight_corr from dual union all
                         select to_timestamp('09/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 6 mtrlid, 20 campaignid, 63.4356 weight_corr from dual union all
                         select to_timestamp('10/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 6 mtrlid, 20 campaignid, 63.5575 weight_corr from dual union all
                         select to_timestamp('11/06/2011', 'dd/mm/yyyy') tsz_entry, 3 storageid, 6 mtrlid, 20 campaignid, 63.6795 weight_corr from dual union all
                         select to_timestamp('12/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 6 mtrlid, 21 campaignid, 63.8015 weight_corr from dual union all
                         select to_timestamp('13/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 6 mtrlid, 21 campaignid, 63.9235 weight_corr from dual union all
                         select to_timestamp('14/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 6 mtrlid, 21 campaignid, 64.0454 weight_corr from dual union all
                         select to_timestamp('15/06/2011', 'dd/mm/yyyy') tsz_entry, 3 storageid, 0 mtrlid, 21 campaignid, 64.1676 weight_corr from dual union all
                         select to_timestamp('16/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 0 mtrlid, 21 campaignid, 64.2895 weight_corr from dual union all
                         select to_timestamp('17/06/2011', 'dd/mm/yyyy') tsz_entry, 2 storageid, 2 mtrlid, 21 campaignid, 64.4116 weight_corr from dual union all
                         select to_timestamp('18/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 2 mtrlid, 21 campaignid, 64.5335 weight_corr from dual union all
                         select to_timestamp('19/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 2 mtrlid, 21 campaignid, 64.6555 weight_corr from dual union all
                         select to_timestamp('20/06/2011', 'dd/mm/yyyy') tsz_entry, 1 storageid, 2 mtrlid, 21 campaignid, 64.7775 weight_corr from dual),
         tabibitosan as (select tsz_entry,
                                storageid,
                                mtrlid,
                                campaignid,
                                weight_corr,
                                row_number() over (partition by storageid order by tsz_entry) - row_number() over (partition by storageid, mtrlid, campaignid order by tsz_entry) grp
                         from   sample_data)
    select storageid,
           mtrlid,
           campaignid,
           sum(weight_corr) sum_weight_corr,
           count(*) total
    from   tabibitosan
    group by storageid,
             mtrlid,
             campaignid,
             grp
    order by storageid, min(tsz_entry);
    STORAGEID     MTRLID CAMPAIGNID SUM_WEIGHT_CORR      TOTAL
             1          0         20        307.6625          5
             1          4         20         49.8715          1
             1          0         20         62.2156          1
             1          4         20         62.9475          1
             1          0         20        189.5746          3
             1          6         20        126.9931          2
             1          6         21        191.7704          3
             1          0         21         64.2895          1
             1          2         21        193.9665          3
             2          0         20         62.3377          1
             2          4         20        187.7448          3
             2          2         21         64.4116          1
             3          4         20         62.8256          1
             3          6         20         63.6795          1
             3          0         21         64.1676          1
             4          0         20         61.6058          1
    I.e. instead of comparing each storageid, mtrlid and campaignid against the whole ordered set of data, you're now comparing it against the ordered set per storageid.
    Two views will give you what you want if you need both types of results, but bear in mind that with the storageid one you may need to switch to an inline view in order for predicate pushing to happen (i.e. calculating the results for the specific storageid(s) passed in, rather than calculating the whole set and then filtering).

  • Referencing Aggregated Column Value in Where Clause

    Hello -
    I'm trying to determine how I can accomplish the following in the most straightforward, efficient way.
    Among other things, I'm selecting the following value from my table:
    max(received_date) as last_received_date
    I also need to evaluate the "last_received_date" value as a condition in my WHERE clause. However, I can't reference my aliased "last_received_date" column, and when I try to evaluate max(received_date) in the WHERE clause, I get the "group function is not allowed here" error.
    Does anyone know of a good workaround?
    Thanks,
    Christine

    Hi,
    Column aliases can be used in the ORDER BY clause; aside from that, they cannot be used in the same query where they are defined. The workarounds are:
    (a) define the alias in a sub-query, and use it in a super-query, like Someoneelse did, or
    (b) repeat the aliased expression, as in the HAVING-clause, below.
    Aggregate functions are computed after the WHERE clause. That explains why you can do things like:
    SELECT  MAX (received_date) last_received_date_2008
    FROM    table_x
    WHERE   TO_CHAR (received_date, 'YYYY')  = '2008';
    The HAVING-clause is like the WHERE-clause, but it is applied after the aggregate functions are computed, e.g.
    SELECT    deptno
    ,         MAX (received_date)  AS last_received_date
    FROM      table_x
    GROUP BY  deptno
    HAVING    MAX (received_date)    > SYSDATE - 7   -- Only show departments with activity in the last week
    ;
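    The evaluation-order point is easy to check against a throwaway table. A sketch using SQLite (the table and column names follow the hypothetical table_x above; the dates are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table_x (deptno INTEGER, received_date TEXT)")
conn.executemany(
    "INSERT INTO table_x VALUES (?, ?)",
    [(10, "2024-01-05"), (10, "2024-03-01"), (20, "2024-01-10")],
)

# WHERE filters rows before aggregation, so MAX() is not allowed there;
# HAVING filters groups after aggregation, so the aggregate is legal here
rows = conn.execute(
    """
    SELECT deptno, MAX(received_date) AS last_received_date
    FROM table_x
    GROUP BY deptno
    HAVING MAX(received_date) > '2024-02-01'
    """
).fetchall()

# rows -> [(10, '2024-03-01')]: only deptno 10 received anything after Feb 1
```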

  • Diagram of sRGB values outside of the sRGB gamut?

    When I have sRGB selected as my color space in Photoshop (CS6), often I see the out of gamut warning in the color picker.  I expected every possible sRGB color value to be within the sRGB color gamut, but apparently I was wrong.  I would like to see a horseshoe color space diagram that shows how far the values extend beyond the gamut.
    I guess this occurs with all color spaces - you can select color values that are outside of the gamut.  I'm not sure what the color space diagrams represent, but I'm guessing they represent the gamut. 
    Thanks!

    It may help your understanding to know the lay of the land a bit better...
    Most users are using Internet Explorer, which is HALF color-managed and reads the document color profile - but assumes the monitor is sRGB, so that anyone using IE with a monitor that's reasonably close to sRGB will see reasonable colors and those who have wide-gamut monitors will always see inaccurate (oversaturated) colors.
    Safari and Firefox do use both the document and monitor profiles, and assuming the profile actually DOES describe the monitor performance (which I think is a stretch for the majority of systems) they can show proper color.  Firefox has some advanced settings that allow a user to set it to a very rational strategy, in which not only images but web page elements are also color-managed, but that's not the default.
    I don't know about Chrome's color-management capabilities.  I'm generally allergic to running Google malware software so I don't test it much.
    But know that color profiles are complex beasts, and can actually be (mis)crafted so that some or all color-management doesn't actually work right.
    Gary Ballard (aka gator_soup) recommends publishing web images without a color profile, claiming this can yield a greater likelihood that people will see your images as intended, but I'm not sure I buy into that argument.  Personally I lean toward getting the color right in Photoshop, publishing with an embedded sRGB profile, and not worrying about it any further.  Anyone who cares about color will work things out on their own system so they're seeing proper color, and as for the others, well, as long as red things are vaguely reddish, and blue things bluish, what's to worry about?
    At the end of the day, there are probably more "standard gamut" monitors (i.e., not wide gamut) out there than anything else, so the likelihood that an image will be seen on a device that's sRGB-like is probably higher than all else.  This may be why Microsoft feels comfortable with leaving IE broken as it is, though frankly the same thing would be accomplished if they used the monitor profile, since Windows defaults to associating sRGB IEC61966-2.1 with monitors.
    If your prime goal is web publishing, consider just making sRGB your working profile of choice, then watch your histograms when you edit to ensure you don't clip black points of any of the channels.  Working in sRGB you cannot have an out-of-sRGB gamut color, so that's a non-issue.
    By the way, you can show (as a status) what the current color space of your document is in a couple of ways:  The little status box at the lower-left of the document editing area, or via the Info Panel.
    -Noel

  • Before aggregation - incorrect values

    Hello,
    I am using a Calculated Key Figure with time of calculation = "before aggregation", and then I use this CKF in the columns. Some rows are calculated correctly, but some are not. When I check the key figure definition for an incorrect one in the query monitor, I can see how it is calculated, and everything seems to be OK in row 1 (picture), but in row 2 the value is incorrect.
    http://raisik.webz.cz/key_fig1.JPG
    Any idea why?
    Thanks for any help.
    Regards
    Pavel

    Hi Agrima,
                Check here:
    http://help.sap.com/saphelp_nw04/helpdata/en/6f/56853c08c7aa11e10000000a11405a/content.htm
    Thanks,
    Vijay.

  • Mobile List Bind aggregation - repeated values?

    Hi all, I'm new to SAPUI5, so please forgive me if this has already been discussed.
    I'm calling an OData service (http://h03zdb.hcc.uni-magdeburg.de:8003/workshop/sessiona/12/Services/msgTab.xsodata/MSGTABLE?$filter=FROM%20eq%20%27STU…) and I get the values stored in a table as JSON. I want to show them in a list, but repeated values are created in the list. Is this because of bindAggregation?

    Here is the whole code.
    I used a split app and added two models, for the detail and master pages.
    I'm very new to this, so sorry for the bad coding.
    <html>
      <head>
      <meta http-equiv="X-UA-Compatible" content="IE=edge" />
      <title>SRH Chat</title>
      <script src="/sap/ui5/1/resources/sap-ui-core.js"
      id="sap-ui-bootstrap"
      data-sap-ui-libs="sap.m"
      data-sap-ui-theme="sap_bluecrystal">
      </script>
      <!-- only load the mobile lib "sap.m" and the "sap_bluecrystal" theme -->
      <script type="text/javascript">
      var sessionKey = "STUDENT01";
      // Login user to get session id
      var oDetailPage2 = new sap.m.Page("detail2");
      var url = "http://h03zdb.hcc.uni-magdeburg.de:8003/workshop/sessiona/12/Services/msgTab.xsodata/MSGTABLE?$filter=TO%20eq%20%27"+sessionKey+"%27%20&%20FROM%20eq%20%27"+sessionKey+"%27%20&$format=json";
      $.get( url, function(data1) {
      // create JSON model instance
      var oModel = new sap.ui.model.json.JSONModel();
      // set the data for the model
      oModel.setData(data1);
      // set the model to the core
      oMasterPage1.setModel(oModel);
      var contactList = new sap.m.List("newlist2",{
      title : "{FROM}",
      mode: sap.m.ListMode.SingleSelectMaster,
                select: function(oEv) {
      var item = oEv.getParameter("listItem");
      var desc = item.getTitle();
      retrieveMsgs(desc);
      function retrieveMsgs(desc)
      var aurl = "http://h03zdb.hcc.uni-magdeburg.de:8003/workshop/sessiona/12/Services/msgTab.xsodata/MSGTABLE?$filter=FROM%20eq%20%27"+desc+"%27%20&$format=json";
      $.get( aurl, function( data ) {
    // // create JSON model instance
      var oModel1 = new sap.ui.model.json.JSONModel();
    // // set the data for the model
      oModel1.setData(data);
      // set the model to the core
      oDetailPage2.setModel(oModel1);
      var contactList1 = new sap.m.List("newlist");
      var inputField = new sap.m.TextArea();
      var but1 = new sap.m.Button({text: "Send", tap: function(){
      // Send button press functionality
      var msg = inputField.getValue();
      var murl = "http://h03zdb.hcc.uni-magdeburg.de:8003/workshop/sessiona/12/WebContent/InsertMsg.xsjs?from="+desc+"&to="+sessionKey+"&msg="+msg;
      $.ajax({
            url: murl,
            type: 'POST',
            async: false,
            timeout: 30000,
            success : function (data2){
            var empty = "";
            inputField.setValue(empty);    // clear text area
      //Populate the List with Data from the Model (Set in the Controller of this View)
      contactList1.bindAggregation("items", {
      path : "/d/results", //Reference to the JSON structure
      template: new sap.m.StandardListItem({
      title: "{FROM}", //Refer the name field in the JSON data
      description: "{MSG}", //Address Field in the data
      type: sap.m.ListType.Navigation //Specify what to do while clicking/tapping an item in the list, in this example navigate to another view
      oDetailPage2.addContent(contactList1);
      oDetailPage2.addContent(inputField);
      oDetailPage2.addContent(but1);
      oSplitApp.to("detail2");
      //Populate the List with Data from the Model (Set in the Controller of this View)
      //oModel.createBindingContext("/d/results", null, {select: "FROM"},function(a){});
      contactList.bindAggregation("items", {
      path : "/d/results", //Reference to the JSON structure
      template: new sap.m.StandardListItem({
      title: "{FROM}", //Refer the name field in the JSON data
      //description: "{FROM}", //Address Field in the data
      //select: "FROM,TO",
      type: sap.m.ListType.Navigation //Specify what to do while clicking/tapping an item in the list, in this example navigate to another view
      var oMasterPage1 = new sap.m.Page("master1",{
        title : "Master"
      // adding the list to master
      oMasterPage1.addContent(contactList);
    // //add the master pages to the splitapp control
      oSplitApp.addMasterPage(oMasterPage1);
    //   .addMasterPage(oMasterPage1);
      //add the detail pages to the splitapp control
      oSplitApp.addDetailPage(oDetailPage2);
      //oSplitApp.setInitialDetail("detail");
      //oSplitApp.setInitialMaster("master");
      oSplitApp.placeAt("body");
      </script>
      </head>
      <body class="body">
      <div id="body">
      </div>
      </body>
    </html>

  • How do i access param/config values outside of war/ejb's

    Hello,
    I have some server-specific values that I would like to look up in my application. Rather than doing a build specific to each server, I would like to store them in a properties file or something like that. What would you recommend? I was thinking of something like a properties file in the j2ee/home/lib directory, but that doesn't seem to work. These would be settings that apply to all 3 tiers (web/ejb/app-clients).
    For example, we have email addresses in the web.xml. They are different for dev-test vs prod environments. Rather than change the web.xml for each environment's EAR, I would like a location or way to look up that information from the server.

    So far, the only way I can think of doing this is to create a jar file that contains only a .properties file and then drop that into /j2ee/home/lib. Any better solutions?

  • OA_MEDIA referencing a value outside the Context Element

    I am attempting to use XML Publisher to change a logo image at the top of a template. I have used the following command with great success:
    url:{concat('${OA_MEDIA}','/',//C_PAYGROUP,'.jpg')}
    Unfortunately, this only works when I have C_PAYGROUP in the Context Element of the XML output I am referencing... in reality this is not the case, as I need to use the absolute path and specify my C_PAYGROUP value from the top level down... i.e.:
    /HBOSRA0830/LIST_G_SEL_CHECKS/G_SEL_CHECKS/C_PAYGROUP
    I have tried the following syntax to no avail and hope someone will be able to help me out a little...
    url:{concat('${OA_MEDIA}/',/HBOSRA0830/LIST_G_SEL_CHECKS/G_SEL_CHECKS/C_PAYGROUP,'.jpg')}
    Many many thanks,
    Terence

    Hi Terence,
    Looks like a slight problem with where the slashes etc are.
    The following will work - note the slash after OA_MEDIA as a separate field in the concat.
    url:{concat('${OA_MEDIA}','/',/HBOSRA0830/LIST_G_C_PAYGROUP/G_C_PAYGROUP/C_PAYGROUP,'.jpg')}
    Robert
