PerformancePoint Services Data Source: SQL Server Table

Hi,
I have a requirement to create a matrix report that shows resource capacity and demand from the Project Server 2010 databases, which involves joining multiple tables and views from the database.
But in PerformancePoint Services 2010 I am not able to write a query that joins multiple tables; the Data Connection dialog only offers a connection to a database and one specific table, unlike SSRS, where we can connect to a database and query multiple tables and views to render the report as required.
Can anybody suggest whether I am missing something in PPS 2010, or can we really not join tables or write a SQL query in PPS 2010?
Thanks

I would join the tables in a view as Regis suggested (a sketch follows below the links), or simply pull the information into a PowerPivot model and use that in SharePoint as the source PPS connects to. If you go that route, you will be able to use a multidimensional source and can create Analytical Reports.
http://denglishbi.wordpress.com/2011/01/03/using-powerpivot-with-performancepoint-services-pps-2010/
http://denglishbi.wordpress.com/2012/02/09/using-performancepoint-services-pps-with-powerpivot-sql-server-2012-rc0/
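
To illustrate the view approach: a minimal sketch, assuming hypothetical table and column names rather than the actual Project Server 2010 reporting schema. The PPS "SQL Server Table" data source can then point at the view as if it were a single table:

CREATE VIEW dbo.vwResourceCapacityDemand
AS
SELECT r.ResourceName,      -- placeholder columns; substitute the real ones
       c.CapacityHours,
       d.DemandHours
FROM dbo.Resources r
JOIN dbo.ResourceCapacity c ON c.ResourceUID = r.ResourceUID
JOIN dbo.ResourceDemand d ON d.ResourceUID = r.ResourceUID;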

Similar Messages

  • PerformancePoint Services data source pointing to the wrong data source after the production instance is copied to development

    Hi
    We have copied our Project Server 2010 production databases to the development server. We have a few reports created using PerformancePoint Services, but when we try to open the reports the data source is pointing to the production data source instead of dev.
    We are getting some issues while executing the reports.
    Please let me know the exact cause and the steps to rectify the issue.
    Thanks 
    Geeth. If you feel that the answer I gave you is helpful, please select it as Answer/Helpful.

    Since you copied the data from Prod to Dev, the reports still point to the production data sources and are unable to display data. Whenever we copy data from one environment to another, this kind of issue occurs.
    We need to modify the data source of all the reports manually so that the reports point to the dev data source.
    Open the data source for each report and change it from prod to dev; the reports will then display the correct data.
    http://blogs.msdn.com/b/performancepoint/archive/2011/09/19/performancepoint-data-connection-libraries-and-content-lists-in-sharepoint.aspx
    http://www.networkworld.com/community/node/57687
    kirtesh

  • Error when inserting data into a SQL Server table (DATETIME data type)

    Hello all,
    I have created a database link in Oracle 11g to SQL Server 2008 using the SQL Server gateway for Oracle. Oracle runs on Linux and SQL Server runs on Windows.
    I have queried a table and it fetches rows from the target table.
    I am using this syntax to insert a row into the SQL Server table:
    INSERT INTO Prod@sqlserver (NUMITEMCODE, NUMPREOPENSTOCK, NUMQNTY, NUMNEWOPENSTOCK, DATPRODDATE, TXTCOMPANYCODE, "bolstatus", NUMRESQNTY)
    VALUES (1118, 1390.0, 100.0, 1490, '2012-06-23 12:37:58.000', 'SFP', 0, 0);
    but it gives me an error on DATPRODDATE. The data type of the DATPRODDATE column in SQL Server is DATETIME.
    My question is: how can I pass date values in an INSERT statement for the SQL Server DATETIME data type?
    Regards

    Just as with Oracle, you have to specify the date using the TO_DATE() function or use the native date format of the target database (if you can figure out what that is). This is good practice anyway and a good habit to get into.
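    For example, the INSERT from the question could be rewritten with TO_DATE like this (a sketch; TO_DATE takes no fractional seconds, so the '.000' is dropped; use TO_TIMESTAMP if you need them):
    INSERT INTO Prod@sqlserver (NUMITEMCODE, NUMPREOPENSTOCK, NUMQNTY, NUMNEWOPENSTOCK, DATPRODDATE, TXTCOMPANYCODE, "bolstatus", NUMRESQNTY)
    VALUES (1118, 1390.0, 100.0, 1490,
            TO_DATE('2012-06-23 12:37:58', 'YYYY-MM-DD HH24:MI:SS'), -- explicit format mask
            'SFP', 0, 0);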

  • Upload CSV file data to SQL Server tables

    Hi all,
    I want clients to upload a CSV file from their machines to the server.
    Then the program should read all the data from the CSV file and do a bulk insert into the SQL Server tables.
    Please help me with how to go about doing this.
    Thanks in advance.

    1) Use a multipart form with input type="file" to let the client choose a file.
    2) Get the binary stream and put it in a BufferedReader.
    3) Read each line, map it to a DTO, and add each DTO to a list.
    4) Persist the list of DTOs (or see the bulk-insert alternative after the links below).
    Helpful links:
    1) http://www.google.com/search?q=jsp+upload+file
    2) http://www.google.com/search?q=java+io+tutorial
    3) http://www.google.com/search?q=java+bufferedreader+readline
    4) http://www.google.com/search?q=jdbc+tutorial and http://www.google.com/search?q=sql+tutorial
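    If the uploaded file ends up staged somewhere the database engine can read, a server-side alternative to persisting DTOs one row at a time is T-SQL BULK INSERT. A minimal sketch, assuming a hypothetical staging table and file path:
    -- Staging table and path are placeholders; adjust columns to the real CSV layout.
    BULK INSERT dbo.StagingTable
    FROM 'C:\uploads\clients.csv'
    WITH (
        FIELDTERMINATOR = ',',
        ROWTERMINATOR = '\n',
        FIRSTROW = 2  -- skip the header row
    );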

  • Need to load data from an OLE DB source (SQL Server table) to an ODBC destination table

    I have around 700,000 records that need to be moved from a table in my SQL Server 2012 database to the DB3 database table. I am currently using an OLE DB source and an ADO.NET destination to do this transfer, but it is taking more than 2 hours. I need to get this done faster; there should be a way, but I have not been able to speed it up. Can anyone help, please?
    Thank you 

    I suspect you are talking about a DB2 database. In that case I would recommend you check the commercial COZYROC DB2 Destination component. It is 20x faster compared to standard RBAR (row-by-row) insertion.
    SSIS Tasks Components Scripts Services | http://www.cozyroc.com/

  • Fetch data from a SQL Server table into an array

    In the following script I am fetching the server details from a text file. Can anyone please help me get the same information from a SQL Server database table, using this query:
    "SELECT DISTINCT [server_name] FROM [Servers] WHERE [Status] = '1'"
    $ServerName = Get-Content "c:\servers\servers.txt"
    foreach ($Server in $ServerName) {
        if (Test-Connection -ComputerName $Server -Count 4 -Delay 2 -Quiet) {
            Write-Output "$Server is alive and Pinging `n"
        } else {
            Write-Output "TXUE $Server seems dead not pinging"
        }
    }

    I have tested it, but it is not working:
    $query = "SELECT DISTINCT [server_name] FROM [Servers] WHERE [Status] = '1'"
    $connection = New-Object System.Data.SqlClient.SqlConnection("Data Source=xxx;Initial Catalog=xxx;Integrated Security=SSPI;")
    $adapter = New-Object System.Data.SqlClient.SqlDataAdapter($query, $connection)
    $table = New-Object System.Data.DataTable
    $adapter.Fill($table) | Out-Null
    # Each row of the result becomes one entry in the array.
    $compArray = @($table.Rows)
    ##### Script Starts Here ######
    foreach ($Server in $compArray) {
        if (Test-Connection -ComputerName $Server.server_name -Count 4 -Delay 2 -Quiet) {
            Write-Output "$($Server.server_name) is alive and Pinging `n"
        } else {
            Write-Output "TXUE $($Server.server_name) seems dead not pinging"
        }
    }

  • Data source: SQL Server in company LAN

    Hi,
    I am using the trial version of Power BI. I need to use a SQL Server instance that is available on my company LAN as a source. If I create a data source providing the server name, will that give Power BI access? Is that related to IP (no direct IP, through the company virtual LAN)?
    Thanks,
    Paddy

    Scheduled Refresh FAQs
    Hi Paddy,
    You author the reports locally the same way you did before, i.e., connect through the PowerPivot tools to the local database or use Power Query to do that.
    When you upload the workbook to Power BI, you can configure scheduled refresh on it. HOWEVER, for the scheduled refresh to work properly, a Gateway and a Data Source need to be configured in the Power BI Admin Center. This step also includes downloading an agent that is installed on-premises and allows the data to be transferred upon refresh.
    Here are some useful links:
    Power BI Admin Center Help
    Scheduled Data Refresh for Power BI
    GALROY

  • Need to delete specific months' data from a SQL Server table

    Greetings Everyone,
    So I have one table which contains 5 years of data; now the business wants to keep just one year of data plus data from the quarter months, i.e. (Jan, Mar, June, Sep and December). I need to do this in a stored procedure. How can I achieve this using a month lookup table?
    Thank you in advance
    R

    Hi Devin,
    In a production environment, you should be doubly cautious about the data. I have no idea why you're about to remove data that is just years old. In one of the applications I used to support, the data retention policy was to keep raw data for the latest month, while older data got rolled up as max, min, average and so on and was stored in another table. That's a good example of data retention.
    In your case I still suggest you keep the older data in another table. If the data size is so huge that it violates your storage threshold, rolling the data up and storing the aggregates would be a good option; a sketch follows.
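    A rough sketch of that rollup idea (all table and column names are placeholders, and DATEFROMPARTS assumes SQL Server 2012 or later):
    -- Aggregate rows older than the latest month into a summary table before purging.
    INSERT INTO yourRollupTable (periodMonth, maxValue, minValue, avgValue)
    SELECT DATEFROMPARTS(YEAR(dateColumn), MONTH(dateColumn), 1),
           MAX(measureColumn), MIN(measureColumn), AVG(measureColumn)
    FROM yourTable
    WHERE dateColumn < DATEADD(MONTH, -1, CURRENT_TIMESTAMP)
    GROUP BY YEAR(dateColumn), MONTH(dateColumn);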
    Anyway, if you don't care about the older data, you can just delete it with code like below.
    DELETE FROM yourTable
    WHERE YEAR(dateColumn) < YEAR(CURRENT_TIMESTAMP)
       OR (MONTH(dateColumn) NOT IN (1,3,6,9,12) AND YEAR(dateColumn) = YEAR(CURRENT_TIMESTAMP))
    In some cases, removing data from a very large table with DELETE performs badly. TRUNCATE, which works faster, would be a better option. Read more by clicking here. In your case, if necessary, you can reference the draft code below.
    SELECT * INTO tableTemp FROM yourTable
    WHERE YEAR(dateColumn) = YEAR(CURRENT_TIMESTAMP) AND MONTH(dateColumn) IN (1,3,6,9,12);
    TRUNCATE TABLE yourTable;
    INSERT INTO yourTable SELECT * FROM tableTemp;
    As you mentioned, you need to do the deletion in a stored procedure (SP). Can you post your table DDL with sample data and specify your requirement details so that I can help compose your SP?
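    In the meantime, a minimal sketch of such a procedure, simply wrapping the DELETE above (the procedure, table, and column names are placeholders):
    CREATE PROCEDURE dbo.usp_PurgeOldRows
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Delete prior years entirely, plus non-quarter months of the current year.
        DELETE FROM yourTable
        WHERE YEAR(dateColumn) < YEAR(CURRENT_TIMESTAMP)
           OR (MONTH(dateColumn) NOT IN (1,3,6,9,12)
               AND YEAR(dateColumn) = YEAR(CURRENT_TIMESTAMP));
    END;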
    If you have any question, feel free to let me know.
    Best regards,
    Eric Zhang

  • Import into SQL Server db from Sybase 7 data source - SQL Server 2008 R2/2012

    Hi,
    I need to import Sybase 7 data into a SQL Server 2008 R2 or 2012 database.
    Does anyone have any experience with this kind of import?
    Thanks

    The nuance is basically in setting your connection manager up right.
    If I remember correctly, I did that using ODBC/ADO, but I do not remember the Sybase version.
    I suggest you visit a post dedicated to this operation (the author chose to use the ADO.NET type of provider):
    http://msbimentalist.wordpress.com/2013/11/01/import-and-export-from-sql-server-to-sybase-db-using-ssis/ 
    Arthur My Blog

  • How to create a SharePoint list to add/change/delete data in a SQL Server table based on user input

    I have a table in SQL with employee_num, and I need to create a list linked to this table so that changes are made to the table based on the values the user enters or selects.

    Hi,
    In addition, you could refer to one similar thread for related information:
    http://social.technet.microsoft.com/Forums/sharepoint/en-US/8ee8a7b2-ddfc-4654-b84e-b062aeb527ae/how-to-create-exernal-list-in-sharepoint-which-fetch-data-from-multiple-sql-table?forum=sharepointgeneral
    Regards,
    Rebecca Tu
    TechNet Community Support

  • How to select data from a SQL Server 2005 database table into an Oracle database table

    Hi,
    I have table text1 in a SQL Server database and text2 in an Oracle database (11g). How can I move data from the SQL Server table into the Oracle table? Please help me with how to do it.
    Thanks a lot in advance.
    rk
    OS: Windows 7 professional

    Hi,
    You can use the export/import wizard and specify SQL Server as the source and Oracle as the destination.
    I hope this is helpful.
    Please mark it as Answered if it answered your question,
    or mark it as Helpful if it helped you solve your problem.
    Elmozamil Elamir Hamid
    MCSE Data Platform
    MCITP: SQL Server 2008 Administration/Development
    MCSA SQL Server 2012
    MCTS: SQL Server Administration/Development
    MyBlog

  • Extract the data from SQL Server and Import into Oracle

    Hi,
    I would like to run a daily job that will export table data from a SQL Server table (it will be only one or two tables) and import it back into an Oracle table (again one or two tables).
    Could you please guide me on how I can do this using either SQL Server or Oracle?
    We have Oracle 9.2 and SQL Server 2005.
    Normally I work from a flat file generated by the source system, which I load into Oracle using SQL*Loader, but this time I have to extract/export the data directly from MS SQL Server and load it into the Oracle table. It will mostly be a reload, so I might be doing some massaging of the data during the load.
    If you can show me the detailed approach, it will be really appreciated.
    I have access to SQL Server, but I don't know how to use SQL Server to do this, or how to run it from Oracle as a daily job either, because I have to schedule the job since it will run daily.
    Thanks,
    poratips

    Unless you can find an open source ODBC driver for SQL Server that runs on Solaris (and I wouldn't be overly hopeful there) Heterogeneous Services would require that you license something-- a third party ODBC driver, a new Oracle instance, or an Oracle Transparent Gateway.
    As I stated below, you could certainly use SQL Server's ETL tool, DTS. Oracle's ETL tools would require additional licensing since you're just on 9.2. You could also write a small application (Java or otherwise) that connected to both databases and transferred the data. If you're particularly enterprising, you could load the SQL Server Type 4 JDBC driver into Oracle's JVM and write a Java stored procedure that connected to the SQL Server database via JDBC, but that's a pretty convoluted approach.
    Justin

  • Export SQL Server Table into Multiple Sheets in Excel

    I'm trying to understand the example here.
    http://visakhm.blogspot.in/2013/09/exporting-sqlserver-data-to-multiple.html
    Basically, I'm up to step #5.
    5. Data Flow Task to populate the created sheet with data for that subject. The Data Flow looks like below
    I don't understand this part. Has anyone worked with this sample before? Has anyone gotten it to work? I'm trying to learn SSIS better, but I'm finding it hard to get started with this stuff. I guess once I get a couple of projects under my belt, I'll be fine. The hardest part is getting started.
    If anyone feels really ambitious today, maybe you can assist me with two other projects as well.
    #1)
    http://visakhm.blogspot.in/2011/12/simulating-file-watcher-task-in-ssis.html
    #2)
    http://sqlage.blogspot.in/2013/12/ssis-read-multiple-sheets-from-excel.html
    http://beyondrelational.com/modules/24/syndicated/398/Posts/18163/ssis-how-to-loop-through-multiple-excel-sheets-and-load-them-into-a-sql-table.aspx
    I'd greatly appreciate any help I can get with this.
    Thanks!!
    Knowledge is the only thing that I can give you, and still retain, and we are both better off for it.

    Hi Ryguy72,
    The solution introduced in Visakh's blog does work. I have written out the detailed steps for you:
    1. Create an Excel file (Des.xls) as the destination file, rename the Sheet1 to “Excel Destination”, and add four columns: ID, Name, Subject, and Marks.
    2. Create an Object type variable Subject, a String type variable SheetName, and a String type variable WorkSheetQuery.
    3. Set the value of variable SheetName to “Excel Destination$”, and set the expression of variable WorkSheetQuery to:
    "CREATE TABLE `" + @[User::SheetName] + "` (
    `ID` Long,
    `Name` LongText,
    `Subject` LongText,
    `Marks` Long
    4. In the Execute SQL Task outside the Foreach Loop Container, set its General tab as follows:
    ResultSet: Full result set
    ConnectionType: OLE DB
    Connection: LocalHost.TestDB (e.g. an OLE DB Connection to the source SQL Server table)
    SQLSourceType: Direct input
    SQLStatement: SELECT DISTINCT [Subject] FROM [TestDB].[dbo].[DynamicExcelSheetDemo]
    5. In the Result Set tab of this Execute SQL Task, map result “0” to variable “User::Subject”.
    6. In the Collection page of the Foreach Loop Container, set the Enumerator option to “Foreach ADO Enumerator”, and set the “ADO object source variable” to “User::Subject”.
    7. In the Variable Mapping page of the container, map variable “User::SheetName” to index “0”.
    8. Create an OLE DB Connection Manager to connect to the destination Excel file. Choose the provider as Native OLE DB\Microsoft Jet 4.0 OLE DB Provider, specify the fully qualified path of the Excel file (such as C:\Temp\Des.xls), and set the Extended Properties
    option of the Connection Manager to “Excel 8.0”. Click “Test Connection” button to make sure the connection is established successfully.
    9. Set the General page of the Execute SQL Task inside the container as follows:
    ResultSet: None
    ConnectionType: OLE DB
    Connection: Des (e.g. the OLE DB Connection to the destination Excel file we create above)
    SQLSourceType: Variable
    SQLStatement: User::WorkSheetQuery
    10. In the Data Flow Task, add an OLE DB Source component, set the connection manager to “LocalHost.TestDB”, set “Data access mode” to “SQL command”, and set the “SQL command text” to:
    SELECT * FROM [TestDB].[dbo].[DynamicExcelSheetDemo] WHERE [Subject]=?
    11. Click the “Parameters…” button, and map Parameter0 to variable “User::SheetName”.
    12. Add an Excel Destination component, setup an Excel Connection Manager to the destination Excel file, set the data access mode to “Table name or view name variable”, and select variable name as “User::SheetName”. In this way, the OLE DB Provider can get
    the Excel metadata and enable us to finish the column mappings correctly.
    13. Since Microsoft Jet 4.0 has only a 32-bit driver, please make sure to set the Run64BitRuntime property of the IS project to False.
    In addition, please note that there will be one useless worksheet, "Excel Destination", in the Excel file. We can remove it manually or use a Script Task to delete the worksheet programmatically (this requires installing the .NET Programmability Support feature during Office installation).
    Regards,
    Mike Yin
    TechNet Community Support

  • Create material master by connecting SAP tables to SQL Server tables

    We would like to use data in SQL Server tables to upload the material master into ECC. We have a number of rules embedded in the SQL Server tables and would like the LSMW program to pick up records directly from the SQL table without passing through a flat-file format. Can anyone guide us on how to accomplish this? We are on SAP AFS ECC 5.0. Thanks in advance.

    Does anyone have any ideas on this...?

  • Incomplete Crawl of SQL Server Table Source

    We are currently experiencing a problem with a Database-based crawler in SES 10.1.8.2 where scheduled crawls produce different results than crawls that are executed manually. Crawler configuration is as follows:
    - Database Source Type pointing at a rollup table in SQL Server.
    - Rollup table is deleted and re-created each evening prior to SES indexing. Rollup table includes a "KEY" column that is the primary key.
    - Schedule properties include a recrawl policy of "Process All Documents" and a crawling mode of "Automatically Accept All URLs for Indexing"
    The symptom we are experiencing basically involves an incomplete crawl of the source. For example, a source table that contains 1,734 rows only translates into 567 indexed documents in SES. This coincides with the Browse counts when using the out-of-the-box version of SES search. If I then manually launch the schedule, the proper number of documents are processed and the Browse feature reflects the proper number.
    One other piece of information is after examining the log for a scheduled crawl, a document that was not indexed does appear in the log as being queued, but we never see a log info line indicating it was processed.
    The bottom line is a scheduled crawl fails to index all database table rows. A manual crawl consistently indexes everything properly.
    Finally, we have made sure that we have minimal overlap of scheduled crawls in case this is a resource issue.
    Any assistance would be greatly appreciated.

    1. What is the use of the table compression?
    Save disk space and sometimes also gain performance.
    2. when do we need to compress the table ?
    Need is maybe not the best word, but when we want to reduce disk space and/or make the performance gains, we would consider compression.
    And, not to forget, if we are on Enterprise Edition. Compression is not available in other editions.
    3. If i compress the table what will be the performance impact
    There are two levels of compression: ROW and PAGE. ROW is basically a different storage format which gives a more compact representation for most data profiles, not least if you have plenty of fixed-length columns that are often NULL. ROW compression has a fairly low CPU overhead. Since compression means that the data takes up less space, a scan of the full table will be faster. This is why you may gain performance.
    Page compression is more aggressive and uses a dictionary. You can make a bigger gain in disk space, but the CPU overhead is fairly considerable, so it is less likely that you will make a net gain.
    To find out how your system is affected, there is a stored procedure, of which I don't recall the name right now, which can give you estimated space savings. But if you also want to see the performance effects, you will need to run a test with your workload.
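    For reference, the procedure alluded to is most likely sp_estimate_data_compression_savings (available since SQL Server 2008); a quick sketch, with a placeholder table name:
    -- Estimate the space savings of ROW compression for a table.
    EXEC sp_estimate_data_compression_savings
         @schema_name = 'dbo',
         @object_name = 'yourTable',
         @index_id = NULL,
         @partition_number = NULL,
         @data_compression = 'ROW';
    -- If the estimate looks good, enable it:
    ALTER TABLE dbo.yourTable REBUILD WITH (DATA_COMPRESSION = ROW);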
    There is also columnstore, which is also a form of compression and which for data warehouses can give enormous performance gains.
    Erland Sommarskog, SQL Server MVP, [email protected]
