Custom sort pivot table columns with Essbase as the data source

Is it possible to sort columns in a pivot table according to an arbitrary value that I define when the data is coming from Essbase?
For example, say I have a dimension called Soda, with values Coke, Diet Coke, Dr. Pepper and Diet Dr. Pepper. I create a report with a sales measure, with the measure labels on the rows and the Soda dimension on the columns. By default the columns will be sorted alphabetically:
        Coke   Diet Coke   Diet Dr. Pepper   Dr. Pepper
Sales     1M         .5M              .75M        1.25M
I want to create a report that looks like this:
        Coke   Diet Coke   Dr. Pepper   Diet Dr. Pepper
Sales     1M         .5M        1.25M              .75M
I think I could do this if the source were relational, just by creating bins or a custom column with a CASE statement that assigns each Soda an arbitrary value, and then sorting on that value. Everything I've tried with Essbase as the source, though, results in:
State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 42043] An external aggregate is found in an outer query block. (HY000)
Any ideas?

Hi,
1. You can try to solve the 'An external aggregate is found in an outer query block' error by changing the aggregation rule for your measure in both the physical and business layers.
By default it is set to Aggr_External; change it to Sum.
In the physical layer: Column properties -> Aggregation rule.
In the business model: Column properties -> Aggregation tab -> Default aggregation rule.
This may change the results, so after making the change, check whether you still get correct values.
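For example, once the aggregation rule is Sum, a sort column like the one you describe should no longer trigger the error. A minimal sketch, assuming the presentation column is "Soda"."Soda":
CASE "Soda"."Soda"
    WHEN 'Coke'            THEN 1
    WHEN 'Diet Coke'       THEN 2
    WHEN 'Dr. Pepper'      THEN 3
    WHEN 'Diet Dr. Pepper' THEN 4
    ELSE 99
END
Add this as a column in the request, sort on it, and hide it in the pivot view.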
2. Also, in case the desired order is the same as the order of members in the Essbase outline and you want to keep Aggr_External, you can create a calculated column that will help you with the sort.
See http://oraclebizint.wordpress.com/2008/04/28/oracle-bi-ee-101332-handling-sort-order-in-hyperion-essbase-931-evaluate-and-mdx/
Hope this helps,
Alex

Similar Messages

  • Create a logical column with more than one data source

    I'm having a problem creating a logical column with more than one data source in Siebel 7.8.
    What I want to do is a union of two physical tables in one logical table.
    For example, I have a "local_clients" table and an "abroad_clients" table. What I want is a logical table "clients" with the client data from both tables.
    What I've tried is dragging the data sources I need onto the logical column.
    However, this isn't working: it only retrieves the data from the first data source.

    Hi!
    I think it is not possible to do this just by dragging the columns onto the logical table. A logical table can have more than one source, but I think each column must map to just one direct source column.
    I'm not sure, but maybe you should use a UNION in SQL to get the data from the two tables. In the physical layer, when you create a new physical table, it's possible to set the "Table type" to "Select". I didn't try that, but it seems that it's possible to have the union table in the physical layer.
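    A sketch of what the SELECT-type physical table's SQL might look like (the column names here are invented for illustration; both tables must expose the same columns):
        SELECT client_id, client_name FROM local_clients
        UNION ALL
        SELECT client_id, client_name FROM abroad_clients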
    Bye.

  • IDOC order of columns not same as the data source

    Hi,
    I am from a non-SAP background. We are using Informatica to pull data from SAP ECC using the Business Content for Integration, pulling data from IDocs. Here is my problem:
    1) We identified a particular data source (0FI_GL_4) for a full-mode data pull using Informatica. However, during extraction we found that the order of ports (columns) in the data source and the order generated in the IDoc are not the same. As a result, the loads fail with data conversion or mismatch errors.
    The question is: how do we ensure that the order of columns in the generated IDoc is the same as in 0FI_GL_4?
    Thanks,
    R.

    Hi,
    The link below may be useful:
    http://wiki.ittoolbox.com/index.php/Re-Connect_R/3_and_BW
    Reg,
    Venkat

  • Increasing the width of Pivot table Columns

    Hi Gurus,
    Small pivot table --> (i.e., one with few enough columns that it can be seen on a single page without scrolling to the right)
    If I have a small pivot table, then in the column properties, through Additional Formatting Options, I can increase the width of an individual column.
    However, if the pivot table is big, I am not able to increase the width of an individual column.
    I tried setting the width in the Additional Formatting Options of the Content Properties (just above Rows and below Sections there is a small grey box containing a finger).
    The Content Properties setting does indeed increase the width of the column, but it also increases the width of the whole pivot table.
    If the pivot table had a fixed number of columns that did not change with the prompt, this solution would work fine; the problem comes when the number of columns in the pivot table is reduced based on the prompt selected. In that case the remaining columns stretch to occupy the whole width set in the Content Properties.
    In short, with Content Properties the width of the pivot table becomes fixed, irrespective of the number of columns returned. Also, after setting Content Properties, I am not able to print the report to PDF.
    So the question is: can we increase the width of a big pivot table whose number of columns keeps changing based on the prompt?
    Big pivot table --> (i.e., one with more columns than fit in a single window frame, so you have to scroll to the right to see the rest)
    I hope I have not made things messy...
    Thanks
    Ashish

    Ashish,
    Yes, you can set a fixed size for these. I'm sure you are fixing these because of the PDF issue, where the sizes might come out irregular. Just play around with the Custom CSS options.
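    For instance, something along these lines in the column's "Use custom CSS style" field might do it (the values are just a guess to illustrate the idea; adjust to your layout):
        width: 150px; min-width: 150px; max-width: 150px;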

  • Feature Request | Allow custom metadata per table/column to be stored

    Someone please correct me if there's already a feature that allows this...
    I'd like to see a feature where you can define a set of metadata that can be stored per table / column, and perhaps a trigger that updates that metadata.
    As a use case, it is sometimes necessary to find out how many records exist in a table, and the typical way of doing this is by running a statement like select count(*) from example_table;. With a large table, this statement might take a long time though, and certainly has overhead associated with it. If this is something that's done on a regular basis, like maybe even once every minute, wouldn't it be much better to store this number as metadata for the table that can be updated on inserts and deletes and then can be queried? It might involve extra overhead on an insert or delete statement to add to or subtract from this number, but then for some applications the benefit of getting the count quickly might outweigh the extra overhead.
    Another use case is finding a minimum or maximum in a table. Say you store a date and you need to find the max value for some feature in your application; with a large table, and especially if it's a date with accuracy to the millisecond, where an index wouldn't help much because most values are unique, it can take quite a bit of time and overhead to find that max value. If you could specify for that column that you'd like to store the max value in metadata, and could query it, it would be very quick to get the info. The added overhead in this scenario would be on insert, update, or especially on delete, where the value would have to be updated. But in some applications, if you don't expect a lot of deletes or updates on this column, it might be worth the added overhead to be able to quickly find the max value for this column.
    I know you could probably make a separate table to store such info, and write triggers to keep it up to date, but why not have a built-in feature in Oracle that manages it all for you? When you create a table, you could define with the column definition something like 'METADATA MAX' and it would store the max value of that column in metadata for you, etc.
    I know that the overhead of this feature wouldn't be good for most circumstances, but there certainly are those cases where it would be hugely beneficial and the overhead wouldn't matter so much.
    Any thoughts?
    Can this be submitted as a feature request? Am I asking in the right place?
    (p.s. while you're at it, make a feature to mimic IDENTITY columns from SQL Server!)

    I don't think what you mentioned is exactly what I was talking about. There's no min_value or max_value in the dba_tab_columns table; there's only high_value and low_value, and they are stored in binary. And I believe to be accurate in the use cases that I suggested, you would have to analyze the table after every insert/update/delete. So no, that's not the same feature I've asked for, although I appreciate the feedback.
    Also, the num_rows in dba_tables relies on the table being analyzed too, so for a table that stores temporary data to be processed, where you want to know the size of the queue every few seconds, it wouldn't make sense to analyze the whole table every few seconds when all you want is a count of the records. It's also inefficient to use the COUNT function with every query when it would be much faster to store the count in some metadata form that is updated with every insert or delete (adding to a count and subtracting from a count with each insert/delete is WAY faster than analyzing the table and letting it literally recount the entire table every time).
    So again, while I appreciate the feedback, I don't think what you mentioned addresses either of the use cases I gave. I'm talking about a different kind of user-defined metadata that could be stored per table/column, with rules to govern how it is updated. Not your standard metadata that requires an analyze and isn't real time. I also only gave a few use cases, but the feature I'm really looking for is the ability for users to define many different types of custom metadata, perhaps even based on their own logic.
    Again, this feature could be implemented right now by creating a USERMETADATA table for every standard table you have, and then using triggers to populate the info you want at the table level and column level, but why do that when it could be built in?
    Also, I don't really agree that having to create a trigger/sequence for every table instead of setting a column as IDENTITY is better. It's cumbersome. Why not build these commonly used features in? It can create a trigger/sequence behind the scenes for all I care, but why not at least let someone mark a column as IDENTITY (or use whatever other term you want) at the time of table creation and let it do everything for them. But that's off-topic; I meant it for more of a side comment, but should really have a separate post about it.
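    For what it's worth, here is a minimal sketch of the trigger-based workaround described above (the USERMETADATA idea); all table and column names are invented for illustration:
        -- side table holding the maintained count
        CREATE TABLE usermetadata (
            table_name VARCHAR2(30) PRIMARY KEY,
            row_count  NUMBER NOT NULL
        );
        INSERT INTO usermetadata VALUES ('EXAMPLE_TABLE', 0);
        -- keep the count current on every insert/delete
        CREATE OR REPLACE TRIGGER example_table_count_trg
        AFTER INSERT OR DELETE ON example_table
        FOR EACH ROW
        BEGIN
            IF INSERTING THEN
                UPDATE usermetadata SET row_count = row_count + 1
                 WHERE table_name = 'EXAMPLE_TABLE';
            ELSE
                UPDATE usermetadata SET row_count = row_count - 1
                 WHERE table_name = 'EXAMPLE_TABLE';
            END IF;
        END;
        /
    A statement-level trigger adjusting the count in bulk would scale better, but this shows the idea.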

  • Move Table Column with AppleScript in Microsoft Word

    Microsoft Word has a flaw (in my opinion) with tables, in that it aligns the left and right text with the margins rather than aligning the table columns with the margins. This results in sloppy tables, because the left and right borderlines lie outside the margins.
    I would like to fix the Word tables by
    calculating the left cell padding and right cell padding in points and setting them to variables {left_pad, right_pad} respectively
    moving the left column right by left_pad
    moving the right column left by right_pad
    The script I was working on does not work, but I will post it to show my thought process as I home in on my solution.
    tell application "Microsoft Word"
        --595 points is width of A4 paper
        -- Set page margin in points to variables
        set {l_margin, r_margin, t_margin, b_margin} to {(get left margin of page setup of active document), get (right margin of page setup of active document), get (top margin of page setup of active document), get (bottom margin of page setup of active document)}
        get {l_margin, r_margin, t_margin, b_margin}
        -- Set specific Paragraph margins
        -- NOTE: If you select a table thinking you wish to drag just the left margin to the right, or the right margin to the left, this code does not accomplish this because each cell has its own paragraph formatting. This code will set the margin for every single cell, because each cell has its own margins! (separate from padding).
        set para_sel to paragraph format of selection
        set paragraph format left indent of para_sel to (centimeters to points centimeters 0.5)
        -- Aligning left and right columns of table with the margins
        -- NOTE: There is a command to set left row indent, but not right row indent (very stupid of Microsoft)
    end tell

    I have worked up something that seems to work (although I cannot promise it is the best way). Hope it helps anyone else who has this need.
    tell application "Microsoft Word"
    activate
    set findRange to find object of selection
    clear formatting findRange -- clear any previous formatting used in a find operation
    set forward of findRange to true -- find forward
    set style of findRange to "List Bullet" -- the style to look for
    tell findRange
    set gotIt to execute find find text "" -- do the search w/o matching any text
    end tell
    if gotIt is true then -- if a match was found
    copy object selection -- copy it to the clipboard
    set mySelection to (the clipboard) -- then put clipboard into a variable
    set myOffset to ¬
    (get selection information selection information type ¬
    (horizontal position relative to page)) -- now put selection info into a variable
    display dialog mySelection & return & (myOffset as text) -- then display it
    end if
    end tell

  • Hiding Table Columns with the Spry Element Selector

    I am trying to set up a toggle button that will show/hide rows >1 when clicked. I've used Adobe's "Hiding Table Columns with the Spry Element Selector" example and it worked fine with an HTML list, until I linked to actual XML data. Now it works in reverse. What gives?
    Here's the example:
    http://a44.awardspace.com/testing/toggleShowHideRows.htm

    That's what I started with. Same result:
    http://a44.awardspace.com/testing/toggleShowHideRows.htm

  • I have a one-column table in pages.  Each cell has text and a number separated by a colon.  Can I automatically make it two column with everything to the right of the colon in the second column?


    Here's another way that is pretty quick to do.
    Formula in Column B is:
    =LEFT(A, FIND(":", A))
    Formula in Column C is:
    =RIGHT(A, LEN(A)-FIND(":", A))
    You can eliminate the colon from the result in column B by writing:
    =LEFT(A, FIND(":", A)-1)
    Once you do the conversion, you should freeze the result by Selecting columns B and C and then Command-C, Edit > Paste Values.
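    As a worked example (the cell value is invented, and explicit cell references are used): if A2 contains "Apples: 12", then
    =LEFT(A2, FIND(":", A2)-1) returns "Apples"
    =RIGHT(A2, LEN(A2)-FIND(":", A2)) returns " 12", including the leading space; wrap it in TRIM() if you want just "12".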
    Regards,
    Jerry

  • How to select column dynamically with sharepoint list as data source in ssrs report

    Hi all,
    I am creating reports from a SharePoint list, but I have a requirement to select the column name dynamically with a SharePoint list as the data source. I didn't find any way of doing this.
    Can anyone help me resolve this issue?
    There is no way of specifying a column name dynamically here in the dataset query:
    <RSSharePointList xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema">
      <ListName>test list</ListName>
      <ViewFields>
        <FieldRef Name="ID" />
        <FieldRef Name="ContentType" />
        <FieldRef Name="Title" />
        <FieldRef Name="Modified" />
        <FieldRef Name="Created" />
        <FieldRef Name="Author" />
        <FieldRef Name="Editor" />
        <FieldRef Name="_UIVersionString" />
        <FieldRef Name="Attachments" />
        <FieldRef Name="Edit" />
        <FieldRef Name="LinkTitleNoMenu" />
        <FieldRef Name="LinkTitle" />
        <FieldRef Name="DocIcon" />
        <FieldRef Name="ItemChildCount" />
        <FieldRef Name="FolderChildCount" />
        <FieldRef Name="test_x0020_date" />
        <FieldRef Name="title2" />
      </ViewFields>
    </RSSharePointList>

    Hi MNRSPDev,
    Sorry for the delay.
    According to your description, I understand that you want to specify the column name dynamically in the dataset query designer when using a SharePoint list data source.
    Based on my research, this is not supported by default. As a workaround, you can use an XML data source. The XML content can be embedded directly within the query. This lets you use the expression capabilities of the processing engine to build queries and data dynamically within the report. It can also be used for retrieving XML data directly from an external data source, passing it using parameters, and embedding it within the query.
    Reference:
    http://www.codeproject.com/Articles/56817/Dynamic-Reports-with-Reporting-Services
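    For a rough idea, an embedded-XML dataset query looks something like this (the element and field names are invented for illustration):
        <Query>
          <XmlData>
            <Root>
              <Row ID="1" Title="Example item" />
              <Row ID="2" Title="Another item" />
            </Root>
          </XmlData>
          <ElementPath>Root/Row</ElementPath>
        </Query>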
    Hope this helps.
    Regards,
    Heidi Duan
    TechNet Community Support

  • Analyse big data in Excel? Why don't the dynamic tables take all the data from the source table?

    Hi,
    I'm doing an internship on a production line.
    My job is to recover production data (input data) and test data (output data) using various types of software (Excel, SAP BusinessObjects, etc.).
    To date, I have recovered hundreds of production data points and organized them in Excel, but I still need to analyze and plot them.
    I would like some ideas on how I could plot and analyze that much data.
    Right now I am trying to use dynamic charts to plot some data, but I have not gotten acceptable results.
    How could I compare, analyze and graph, for example, five columns of production (input) data against five columns of test (output) data?
    Can someone suggest a technique for analyzing the data? i.e., should I compare column by column, or use some other technique, such as analyzing the data as a conglomerate?
    To give you an idea of the context: I am doing an internship at a turbine manufacturer. My job is to analyze the input (production) data and estimate the likely behavior of the turbines in the tests.
    As I said, I use dynamic tables (pivot tables) in Excel, but I have no idea why the dynamic tables don't take all the data from the source table.
    I appreciate your advice
    Thanks

    You can declare the whole columns [$A:$E] as the PivotTable source, without row numbers.
    Then you'll always have all the current data.
    Oskar Shon, Office System MVP - www.VBATools.pl
    If helpful, mark as Answer when the problem is solved.

  • Custom SharePoint 2010 designer page throws "The data source control failed to execute the insert command" exception while adding the new item after the August 13, 2013 CU has installed

    We have a SharePoint Server 2010 with SP1 environment on which custom SP2010 Designer pages were working as expected before the August 13, 2013 CU was installed. But we are getting the below exception while trying to add a new item after the CU was installed.
    Error while executing web part: System.NullReferenceException: Object reference not set to an instance of an object. at Microsoft.SharePoint.WebControls.SPDataSourceView.ExecuteInsert(IDictionary values) at System.Web.UI.DataSourceView.Insert(IDictionary values, DataSourceViewOperationCallback callback) 3b64c3a0-48f3-4d4a-af54-d0a2fc4553cc
    06/19/2014 16:49:37.65 w3wp.exe (0x1240) 0x1300 SharePoint Foundation Runtime tkau Unexpected Microsoft.SharePoint.WebPartPages.DataFormWebPartException: The data source control failed to execute the insert command. 3b64c3a0-48f3-4d4a-af54-d0a2fc4553cc at Microsoft.SharePoint.WebPartPages.DataFormWebPart.InsertCallback(Int32 affectedRecords, Exception ex) at System.Web.UI.DataSourceView.Insert(IDictionary values, DataSourceViewOperationCallback callback) at Microsoft.SharePoint.WebPartPages.DataFormWebPart.FlatCommit() at Microsoft.SharePoint.WebPartPages.DataFormWebPart.HandleOnSave(Object sender, EventArgs e) at Microsoft.SharePoint.WebPartPages.DataFormWebPart.RaisePostBackEvent(String eventArgument) at System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) at System.Web.UI.Page.ProcessRequestMain(Boolean inclu... 3b64c3a0-48f3-4d4a-af54-d0a2fc4553cc
    06/19/2014 16:49:37.65* w3wp.exe (0x1240) 0x1300 SharePoint Foundation Runtime tkau Unexpected ...deStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) 3b64c3a0-48f3-4d4a-af54-d0a2fc4553cc
    I have tried changing the "DataSourceMode" as below, now the insert command is working, but update command is not working.
    <SharePoint:SPDataSource runat="server" DataSourceMode="ListItem" />
    Also, the lookup dropdown fields are displaying the value as "<a href="Daughterhttp://cpsp10/sites/Employees/_layouts/listform.aspx?PageType=4&ListId={8F62F444-FB6A-4F03-9522-C4696B45DCD1}&ID=10&RootFolder=*">Daughter</a>"
    instead of only "Daughter".
    Please provide a solution to get rid of this issue.
    Thanks
    Ramasubbu

    Try the links below:
    http://social.technet.microsoft.com/Forums/en-US/ae910269-3a0c-4506-844b-e8bc89d95b71/data-source-control-failed-to-execute-the-insert-command
    http://blog.jussipalo.com/2012/01/sharepoint-2010-data-source-control.html
    While there can be many causes for this generic error message, in my case the first parameter of the ddwrt:DataBind function inside the SharePoint:FormFields element was 'i' while I was working with an Edit Form. Changing it to 'u', as it was in every other FormField, fixed the issue:
    <SharePoint:FormField runat="server" id="ff1{$Pos}" ControlMode="Edit" FieldName="Esittaja" __designer:bind="{ddwrt:DataBind('u',concat('ff1',$Pos),'Value','ValueChanged','ID',ddwrt:EscapeDelims(string(@ID)),'@Esittaja')}" />
    Explanation:
    DataBind operation type parameters (the first parameter) are listed below:
    'i' stands for INSERT,
    'u' stands for UPDATE,
    'd' stands for DELETE.
    http://webcache.googleusercontent.com/search?q=cache:d9HHY4I7omgJ:thearkfloats.blogspot.com/2014/03/sharepoint-2010-data-source-control.html+&cd=4&hl=en&ct=clnk&gl=in
    If this helped you resolve your issue, please mark it Answered

  • Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of '

    When I deploy the cube which is sitting on my PC (local) the following 4 errors come up:
    Error 1 The data source 'AdventureWorksDW' contains an ImpersonationMode that is not supported for processing operations.  0 0 
    Error 2 Errors in the high-level relational engine. A connection could not be made to the data source with the DataSourceID of 'Adventure Works DW', Name of 'AdventureWorksDW'.  0 0 
    Error 3 Errors in the OLAP storage engine: An error occurred while the dimension, with the ID of 'Customer', Name of 'Customer' was being processed.  0 0 
    Error 4 Errors in the OLAP storage engine: An error occurred while the 'Customer Alternate Key' attribute of the 'Customer' dimension from the 'Analysis Services Tutorial' database was being processed.  0 0 

    Sorry, I hit the wrong button there. That is not the entire solution: setting it to default works when using a single box, but not in a distributed application solution. If you are creating the Analysis Services database manually or using the wizard, you can set the impersonation to your heart's content, as long as the right permissions have been set on the Analysis Services server.
    In my case I was using MS Project Server 2010 to create the database in the OLAP configuration section. The underlying build script was configured to use the default setting, which is the SQL Server service account, and I believe this account did not have permission in Project Server.
    Changing the account to match the Project service account allowed a successful build/creation of the database. My verdict is that this is a bug in Project Server: it needs to include the option to choose impersonation when creating the database, so that it does not use the default, which is what led to my error in the first place. I do not think there is a one-size-fits-all fix for this problem; it is an environment-by-environment issue and should be resolved as such.
    But the idea behind fixing it is this: if you are using the SQL Analysis Services service account as the account creating the database and cubes, then the default or service account is fine. If you are using a custom account, set that custom account in the impersonation details after you have granted it the Analysis Services administrator role. You can remove that role after the DB is created and harden it by creating a role with administrative permissions.
    Hope this helps.

  • Errors in the high-level relational engine. The data source view does not contain a definition for the table or view. The Source property may not have been set.

    Hi All,
    I have a cube in which I'm using the time dimension that I created in the warehouse. But now I want a new measure in the cube, an average over time, and when I tried to create the new measure I got a message that no time dimension was defined. So I created a new time dimension in SSAS using the wizard, but when I try to process the new time dimension I get the following error message:
    "Errors in the high-level relational engine. The data source view does not contain a definition for "SSASTIMEDIM" the table or view. The Source property may not have been set."
    Can anyone please tell me why I cannot create a new measure averaging over time using my time dimension? Also, what am I doing wrong with SSASTIMEDIM that I'm getting this error?
    Thanks

    Hi PMunshi,
    According to your description, you get the above error when processing the time dimension, right?
    In this scenario, since you have updated the DSV, there should be no problem with the table's existence. One possibility is that the table has been specified for tracking in the notifications for proactive caching, but isn't available any more for some reason. Please change the storage setting in Proactive Caching to "MOLAP".
    Reference:
    How To Implement Proactive Caching in SQL Server Analysis Services SSAS
    If you have any question, please feel free to ask.
    Best Regards,
    Simon Hou
    TechNet Community Support

  • Error while importing the tables from MySQL using the data source connection

    Hi,
    I am trying to import tables from MySQL into PowerPivot using the data source connection. If I use the import with the Query option it works fine, but not with the select-list-of-tables option.
    When I click on the select-list-of-tables option, I get the below error after selecting all the tables to be imported:
    OLE DB or ODBC error.
    An error occurred while processing table 'XXXXXXXXXX'.
    The current operation was cancelled because another operation in the transaction failed.

    Hi Bharat17an,
    Please provide the details of how you create the MySQL connection in your PowerPivot model. Here is a good article on how to use MySQL and Microsoft PowerPivot together, for your reference:
    http://www.datamensional.com/2011/09/how-to-use-mysql-and-microsoft-powerpivot-together-2/
    If the issue still persists, please collect the Windows event log information. It may be helpful for us in troubleshooting this issue.
    Regards,
    Elvis Long
    TechNet Community Support

  • We are using EBS 12.1.3.  When we input a sales order from a customer we input the sales order and specify the date the customer wants it.  This isn't always the date that we intend on manufacturing it though.  I need to put a customer due date in, but be

    We are using EBS 12.1.3.  When we input a sales order from a customer we input the sales order and specify the date the customer wants it.  This isn't always the date that we intend on manufacturing it though.  I need to put a customer due date in, but be able to put a date in another field that MRP can read in the event we choose to manufacture based on another date.  For example, early.
    Any help would be appreciated.

    What you are experiencing is 100% related to Malware.
    Sometimes a problem with Firefox may be a result of malware installed on your computer that you may not be aware of.
    You can try these free programs to scan for malware, which work with your existing antivirus software:
    * [http://www.microsoft.com/security/scanner/default.aspx Microsoft Safety Scanner]
    * [http://www.malwarebytes.org/products/malwarebytes_free/ MalwareBytes' Anti-Malware]
    * [http://support.kaspersky.com/faq/?qid=208283363 TDSSKiller - AntiRootkit Utility]
    * [http://www.surfright.nl/en/hitmanpro/ Hitman Pro]
    * [http://www.eset.com/us/online-scanner/ ESET Online Scanner]
    [http://windows.microsoft.com/MSE Microsoft Security Essentials] is a good permanent antivirus for Windows 7/Vista/XP if you don't already have one.
    Further information can be found in the [[Troubleshoot Firefox issues caused by malware]] article.
    Did this fix your problems? Please report back to us!
