Last member based on date in a PerformancePoint scorecard

Hello,
I have a scorecard in PerformancePoint with a Month-Year date parameter. We have data for only a few months, such as March, August, and December 2014. The date drop-down parameter lists the last 12 Month-Year values, e.g. May 2014 through April 2015. When a user
selects June 2014, I need to display the March 2014 data, because we don't have data for June 2014. If the user selects March 2015, we need to display the December 2014 data, since that is the latest month with data. How can I do that in PerformancePoint? Do I have to write the logic in custom KPI
MDX, or in a connection formula in the PerformancePoint dashboard? What logic do I need to write?
Thank you!

Hi Satish,
Have you tried the following?
Create a Month1 object as =Month([Month]) and a Total object as =Sum([Measure]).
Then drag both the Month1 and Total objects into the report block.
Regards,
Yuvraj
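
As for the original question: one common route is custom MDX in the KPI's data mapping, making the value expression fall back to the last month at or before the selection that actually has data. Below is a minimal sketch, assuming the time hierarchy is [Date].[Month-Year], the measure is [Measures].[Amount], and the member key format is YYYY-MM; all of these names are placeholders for whatever your cube actually uses:

   // A sketch: value of the last non-empty month at or before the selected month.
   WITH MEMBER [Measures].[Last Available Amount] AS
       (
         Tail(
           NonEmpty(
             NULL : [Date].[Month-Year].CurrentMember,   // all months up to the selection
             [Measures].[Amount]
           ),
           1
         ).Item(0).Item(0),                              // the last month that has data
         [Measures].[Amount]
       )
   SELECT [Measures].[Last Available Amount] ON COLUMNS
   FROM [YourCube]                                       // placeholder cube name
   WHERE ( [Date].[Month-Year].&[2014-06] )              // June 2014 falls back to March 2014

Only the Tail(NonEmpty(...)) tuple expression would need to go into the KPI's custom MDX; the WITH/SELECT wrapper is just there so the sketch can be tested in SQL Server Management Studio first.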

Similar Messages

  • How to get annotations from a PerformancePoint scorecard out of the SharePoint database

    Hi,
    I have a requirement to put the annotations from a PerformancePoint scorecard on a Reporting Services report.
    I see where the annotations are in the database, but I don't see how to get the GUID or ID associated with the scorecard name.
    Can someone provide the relationships between the tables?
    David Botzenhart


  • Colors and styles of a PerformancePoint scorecard change between the designer and the published version

    Greetings specialists!
    I'm experiencing a problem with one of my scorecards. I designed the scorecard using PerformancePoint Dashboard Designer and used it to color the top-level KPI text white. The rest of the text uses a dark grey, but when I use the scorecard in a
    web part on a PerformancePoint web page, all the text appears in dark grey, including the top-level KPI text that was set to white in the designer.
    Is this some sort of problem or a known bug? Is there any reason that can explain this erratic behaviour? Maybe it's inheriting the style from something else? Any ideas, clues, or anything I can do to help solve the problem?
    Thanks in advance!

    The first thing will be to determine what is overriding the colors. Follow this guide to select the object that is the wrong color and post a screenshot of the CSS styles that are being applied (in the right panel).
    http://msdn.microsoft.com/en-us/library/dd565627(v=vs.85).aspx
    Dimitri Ayrapetov (MCSE: SharePoint)

  • PerformancePoint Services data source: SQL Server table

    Hi,
    I have a requirement to create a matrix report that shows resource capacity and demand from Project Server 2010 databases, which involves joining multiple tables and views from the database.
    But in PerformancePoint Services 2010 I am not able to write a query that joins multiple tables: the data connection only offers the option to connect to a database and one specific table, unlike SSRS, where we can connect to a database and query
    multiple tables and views to render the report as required.
    So can anybody tell me whether I am missing something in PPS 2010, or can we really not join tables or write a SQL query in PPS 2010?
    Thanks

    I would join the tables in a view, as Regis suggested, or simply pull the information into a PowerPivot model and connect to that in SharePoint from PPS. If you go that route, you will be able to use a multidimensional source and can create
    Analytical Reports.
    http://denglishbi.wordpress.com/2011/01/03/using-powerpivot-with-performancepoint-services-pps-2010/
    http://denglishbi.wordpress.com/2012/02/09/using-performancepoint-services-pps-with-powerpivot-sql-server-2012-rc0/

  • PerformancePoint Services data source pointing to the wrong data source after the production instance is copied to development

    Hi
    We have copied our Project Server 2010 production databases to the development server. We have a few reports created using PerformancePoint Services, but when we try to open the reports, the data source is pointing to the production data source
    instead of dev.
    We are getting some issues while executing the reports.
    Please let me know the exact cause and the steps to rectify the issue.
    Thanks
    Geeth. If you feel that the answer I gave you is helpful, please select it as Answer/Helpful.

    As you have copied data from production to development, the reports are pointing to the production data sources and are unable to display data. This kind of issue occurs whenever data is copied from one environment to the other.
    We need to modify the data source of all the reports manually so that the reports point to the dev data source.
    Open the data source for your reports and change it from prod to dev; the reports will then display the correct data.
    http://blogs.msdn.com/b/performancepoint/archive/2011/09/19/performancepoint-data-connection-libraries-and-content-lists-in-sharepoint.aspx
    http://www.networkworld.com/community/node/57687
    kirtesh

  • From a performance point of view, what do we need to do for data loading?

    Hi,
    1. From a performance point of view, what do we need to do for data loading?
    2. How do we increase cube performance and query performance?
    Thanks,
    Basava Raju

    Hi,
    In addition to the above steps, especially for data loading, make sure that any custom code is tuned for performance, both on the R/3 side and on the BI side.
    For generic data sources, try to make them generic deltas, or try pseudo deltas for them.
    From a query perspective, use include conditions in filters wherever possible instead of exclusions.
    Cheers,
    Kedar

  • In an Analyzer report how can I filter based on the last member of a dimension?

    Using Analyzer 6.2.1. What I'm trying to do is create a report that always shows the last month available in the cube. I'd rather avoid having to add a UDA or an alias or anything else to my outline. Is there a way to retrieve the last member of a dimension directly in Analyzer?

    "Substitution Variables" can do what you are asking for.

  • PerformancePoint: 1 dashboard, 2 cubes, 1 Time Intelligence filter, filtering 2 different date dimensions on different cubes!

    Hi
    Hope you are well.
    I have a challenge. I am working with PerformancePoint and a number of different cubes.
    In this case I need to include graphs and charts from 2 different cubes on one dashboard page.
    The challenge is to use a Time Intelligence filter from one cube, e.g. [TimeSold].[Calendar hierarchy] on Cube 1, and have it filter graphs from the second cube, which uses [Time].[Calendar hierarchy], all on one dashboard page.
    Is there any way I can achieve this using MDX or parameters within PerformancePoint?
    I cannot amend the cubes or dimensions; I have to work with the front-end tools only.

    Hi ShebUK,
    I don't think we can achieve this requirement on the front end, but you can discuss this issue at the following forum:
    http://social.technet.microsoft.com/Forums/en-US/home?category=performancepointserver
    I'm not familiar enough with PerformancePoint Services to be sure whether we can implement a linked dimension to achieve this. A linked dimension is one that exists in one Analysis Services database but is reused in another Analysis Services database of the same version
    and compatibility level. For more information, please see:
    Define Linked Dimensions: http://msdn.microsoft.com/en-us/library/ms175648.aspx
    Hope this helps. 
    Regards,  
    Elvis Long
    TechNet Community Support
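
    A follow-up note: one front-end workaround that is sometimes attempted for this scenario is a connection formula on the filter-to-report connection for the second cube's reports. Dashboard Designer substitutes the token <<SourceValue>> with the value the filter sends before evaluating the formula against the target data source, so the month can be rebuilt by name on the second cube's hierarchy. A sketch, assuming the connection is configured to send the Display Value and that member names match between the two Time dimensions (both of these are assumptions):

        // Connection formula for the Cube 2 report connection (a sketch, not a guaranteed fix)
        StrToMember("[Time].[Calendar hierarchy].[<<SourceValue>>]")

    If the member names cannot be made to line up across the cubes, a server-side change such as the linked dimension mentioned above remains the reliable route.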

  • Query performance and data-loading performance issues

    What query performance issues do we need to take care of? Please explain and let me know the transaction codes; this is urgent.
    What data-loading performance issues do we need to take care of? Please explain and let me know the transaction codes; this is urgent.
    Full points will be rewarded.
    Regards,
    Guru

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option.
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8)Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9)Build secondary indexes on the tables for the selection fields; this optimizes the tables for reading and reduces extraction time. If your selection fields are not key fields on the table, primary indexes are not much help when accessing data. In this case it is better to create secondary indexes with the selection fields on the associated table, using the ABAP Dictionary, to improve selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • PerformancePoint Dashboard Designer filter and SSRS parameter problem

    I've created reports in SSRS, with parameters, but can't get the filter in Performancepoint to have any affect on the SSRS parameter/reports. My SSRS reports are using shared datasets I've created, with no cubes. My SSRS parameter is text data type.
    In the Dashboard Content window, I have the filter in the header, and have dragged (at different times) the display value and the member unique name to the report.
    In this header area (with the filter), clicking the down arrow shows no Edit Connection option. Should there be one? Or would that only be for connecting to a higher-up filter? When clicking Create Connection, under the Items tab, 'get values from' lists header
    and Date and is greyed out. I can put the report name in 'send values to', but under Connect To there are no options. Why? And the OK button is greyed out.
    In the report area, the filter name (Date) is listed underneath Connections. And right clicking the filter for this connection and clicking Edit Connection, under the Items tab, under 'get values from', it says header and Date, and under 'send values to',
    zone 1 - and the report name is listed. Under the Values tab, the parameter name (DateNameParm) is listed in the Connect To, and I've tried both member unique name and display value for source value.
    There's another thread on here titled "Performance Point - Dashboard Filter Not Working with SSRS Reporting Parameters" (won't let me post link), where someone here says they had to change the filter data type
    from GUID, but I see no place to do this.

    Hi Roberto,
    Dashboard layout and sizing in PerformancePoint is very flexible. This article covers nearly all of the sizing options available, one basic dashboard page scenario and a few common questions.
    http://blogs.msdn.com/b/performancepoint/archive/2008/01/07/performancepoint-dashboard-sizing-explained.aspx
    Please go to the last question and answer: "Filters are locked to auto-sizing, so their width will override the size of their parent zone. Minimum filter width may be adjusted in the ParameterTree.css file. See the article Working with the Monitoring CSS Files to Adjust Settings in Deployed Dashboards on the PerformancePoint team blog for more detail."
    As a best practice, put dashboard filters in a separate zone from where you put reports and scorecards. Otherwise, items such as chart legends or reports might not be displayed correctly.
    Regards,
    Rebecca Tu
    TechNet Community Support

  • Issue with PerformancePoint filters and SSRS when visibility is toggled

    I have a SSRS report that has the visibility toggled with the (+/-) sign. The report is SQL Server 2008 R2 / SharePoint Integrated in a SharePoint 2010 Site. The reports datasource is an Analysis Services database.
    I am displaying the report in a PerformancePoint dashboard and am connecting it to PerformancePoint filters using the member unique name. The report filters correctly when the dashboard filters change. However, when I click a plus sign to toggle
    the visibility, the report goes back to its default filters of '[Dimension].[All]', all data is returned, and the visibility is not expanded.
    I have confirmed that the report visibility toggling works in BIDS and via the report stored in SharePoint. I have also confirmed that it works when the report is stored in a native-mode report server and integrated with PerformancePoint
    in the same manner.
    So I think I have whittled the problem down to the combination of SharePoint Integrated mode and PerformancePoint.
    Anyone have any ideas?
    Jewels

    Hi,
    I am having a similar issue with PerformancePoint, SSRS, and the visibility toggle. Were you able to figure this out?
    I have a report that calls a subreport. When I click on the report, it passes parameters to the subreport, which works fine. But as soon as I click the visibility toggle, it goes back to the defaults. The strange thing is that it works in PerformancePoint as a
    stand-alone report (the subreport alone in PP); the issue only happens when it is a subreport and the parameters are passed through the URL.
    Any suggestions are welcome. I have spent hours trying to figure this one out.

  • PerformancePoint: filter a scorecard by the Time dimension (without Time Intelligence)

    Hello,
    I use PerformancePoint 2010 and want to build a PerformancePoint dashboard with a scorecard. My requirement is to provide a list of years as a filter for the scorecard.
    In the scorecard I have a KPI that I want to filter by the Time dimension. I can do this when I use "Time Intelligence" and a "Time Intelligence Connection", but I want to use a "Member Selection Filter" to filter the scorecard
    KPIs by year (selected from a list).
    I created a "Member Selection Filter" with the appropriate Time dimension and put the filter on a dashboard. In my KPI source mapping I added a new dimension filter and chose the same Time dimension as
    in the "Member Selection Filter".
    When I try to connect the "Member Selection Filter" to my scorecard, I can only choose "Current Date Time". As far as I know, "Current Date Time" is meant to be used when a Time Intelligence filter is in use.
    How do I connect the scorecard to the "Member Selection Filter"?
    Thank you very much


  • PerformancePoint 2007: filter seems to split the query

    Hi all,
    I am having an issue with a PerformancePoint 2007 filter. It is based on a hierarchy of 4 levels. The total number of records is under 500 (about 450), so I shouldn't be hitting the filter limit.
    However, the filter is stopping about 3/4 of the way through the list on our production server (QA and Dev are fine). I have looked for the filter limit in web.config but can't find it on any server.
    So I ran SQL Profiler on the cube while running PPS. I can see the queries come in, and PPS is splitting the filter query into two queries at exactly the point where we are seeing the missing data.
    Any ideas why?
    Many thanks
    Dan

    No, the issue is still not resolved, but I convinced my client to use an alternative option, which is SSRS reports with MDX.
    I even tried the configuration below, but the issue was not resolved:
      <system.serviceModel>
        <client>
          <commonBehaviors>
            <behaviors>
              <endpointBehaviors>
                <dataContractSerializer MaxItemsInObjectGraph="2147483647" />
              </endpointBehaviors>
            </behaviors>
          </commonBehaviors>
          <endpoint
            name="http"
            contract="Microsoft.SharePoint.Taxonomy.IMetadataWebServiceApplication"
            binding="customBinding"
            bindingConfiguration="MetadataWebServiceHttpBinding" />
          <endpoint
    Regards,
    adeel
    Muhammad Adeel

  • Please validate my logic from a performance point of view

    Please validate my logic from a performance point of view.
    The logic I wrote:
       LOOP AT i_mara INTO wa_mara.
    *-----For the material description, read the makt table.
          SELECT SINGLE maktx
            FROM makt
            INTO l_maktx
           WHERE matnr = wa_mara-matnr   "was lwa_mara-matnr, which is not the loop work area
             AND spras = 'E'.
          IF sy-subrc = 0.
            wa_mara-maktx = l_maktx.
          ENDIF.        " IF sy-subrc = 0.
    *-----For the recurring inspection interval, read the marc table.
          SELECT prfrq
            FROM marc
            INTO l_prfrq
            UP TO 1 ROWS
           WHERE matnr = wa_mara-matnr.
          ENDSELECT.
          IF sy-subrc = 0.
            wa_mara-prfrq = l_prfrq.
          ENDIF.          " IF sy-subrc = 0.
    *-----Transport prfrq as well; transporting only maktx would lose the prfrq value.
          MODIFY i_mara FROM wa_mara TRANSPORTING maktx prfrq.
          CLEAR wa_mara.
       ENDLOOP.   " LOOP AT i_mara INTO wa_mara.
    Or is it better to do the following? SELECT all the maktx values from makt and all the prfrq values from marc into two internal tables up front, then fill i_mara with indexed reads in a single loop, for example (declarations of lt_makt, lt_marc, wa_makt, and wa_marc assumed):
       " Two array fetches instead of per-row SELECTs (check that i_mara is
       " not empty before FOR ALL ENTRIES, or the WHERE clause is ignored).
       SELECT matnr maktx FROM makt INTO TABLE lt_makt
         FOR ALL ENTRIES IN i_mara
         WHERE matnr = i_mara-matnr AND spras = 'E'.
       SELECT matnr prfrq FROM marc INTO TABLE lt_marc
         FOR ALL ENTRIES IN i_mara
         WHERE matnr = i_mara-matnr.
       SORT: lt_makt BY matnr, lt_marc BY matnr.
       LOOP AT i_mara INTO wa_mara.
         READ TABLE lt_makt INTO wa_makt WITH KEY matnr = wa_mara-matnr BINARY SEARCH.
         IF sy-subrc = 0. wa_mara-maktx = wa_makt-maktx. ENDIF.
         READ TABLE lt_marc INTO wa_marc WITH KEY matnr = wa_mara-matnr BINARY SEARCH.
         IF sy-subrc = 0. wa_mara-prfrq = wa_marc-prfrq. ENDIF.
         MODIFY i_mara FROM wa_mara TRANSPORTING maktx prfrq.
       ENDLOOP.
    Or is there any better-performing logic you would suggest?
    Thanks in advance.

    OK, this is very funny. So if someone comes up with a good way to code, he should wait until he has 1198 points before he can write a performance wiki? That would mean only people with high SDN points can write wikis.
    For your information, the definition of a wiki is here:
    http://en.wikipedia.org/wiki/Wiki
    It's all about contribution and sharing.
    Did you try that code on a production or quality server? If you had, you wouldn't say that, because the results I showed in that blog are what I myself tested on a quality system of our client.
    And for your information, I did my internship at an SAP AFS consultancy firm and created my account at that time. I have since joined that company and now work there as a developer.
    If you have worked on client system development in SD and MM, you will know that most of the time we use header and item tables such as
    likp, lips
    vbak, vbap
    vbrk, vbrp
    and most of the time we come across nested loops with a similar kind of condition.
    In this question he has MATNR as the reference. If you look at it properly, you can see that both tables are sorted and that the SELECT statement uses FOR ALL ENTRIES.
    For your information, there cannot be a delivery document item without a header; if you are aware of DB concepts, you know that would be a foreign-key violation. But let's consider a situation like that anyway: even then, if there is no header data, the client simply won't request that record (you would know this if you had worked with clients).
    Last but not least, I don't care about my points rating on SDN. I just wanted to share what I know, because I already have a very good job here. Don't try to put people down just because they are new.
    Thomas Zloch: I never said it was my code. I saw it somewhere, checked it, and blogged it so I could find it again when I needed it. And I saw it in SE30 (not SE38), but I know most ABAP developers don't check that much, so I just wanted to help.
    Rui Pedro Dantas: yes, you're correct, we don't need it most of the time, since a sorted table is easy, but there are programs that work with bulky data loads where it can be useful. Thanks for telling the truth.
    Nafran
    Sorry if I said anything to hurt anyone.

  • PerformancePoint Services filter reset on button click?

    Hi.
    Is it possible to reset a PPS filter on a button click (a common "Reset" button)?
    Is it possible to do this using JavaScript?
    I need a workaround to achieve this.
    Thanks & Regards
    Poomani Sankaran

    Hi,
    According to your post, my understanding is that you want to reset a PerformancePoint Services filter on a button click.
    You can follow the article
    SharePoint Dashboard with Common Filters using PerformancePoint Services 2010 to create a dashboard with common filters, using PerformancePoint Services 2010 and a SQL Server Analysis Services (SSAS) data source.
    You will then see the filters at the top of your dashboard. You can change the filters and see that both reports render data according to your filter choice.
    If you'd like to default the filter to a particular value on page load, you might want to try a JavaScript-based approach:
    Define the function 'NotifyBrowserOfAsyncUpdate(elem)', which PPS will call after any PPS render request completes (i.e., when your filter renders).
    Within the function, check that elem.control exists and then check that elem.control.webPartTitle equals your filter's title.
    If it does, search elem (the filter web part's main container DIV) for the actual filter control and change the value. If you're using a dropdown, this is pretty simple - just remember to call the 'onchange' method on the select as well so that connected
    PPS objects refresh.
    Here is a similar thread for your reference:
    http://sharepoint.stackexchange.com/questions/61939/default-pps-filter-for-pps-dashboard-in-sharepoint-2010-not-showing
    Best Regards,
    Linda Li
    TechNet Community Support
