ReadAllQuery on Aggregate Entities
Hi, I am struggling with implementing a ReadAllQuery in a situation with a table and an aggregated table representing its primary-key fields.
e.g.
@Entity
public class TheTable {
    // some columns...
    @EmbeddedId
    private TheTable_PK pk;
}
@Embeddable
public class TheTable_PK {
    // primary-key columns...
}
So far I have been using ReadAllQuery and setting the reference class in the usual way. However, I can't work out from experimenting and reading the documentation how to do this with aggregated tables.
It works fine using a NativeQuery and building the query up that way, but that doesn't fit in with my framework.
Any advice gratefully received.
Patrick
Aggregates / Embeddables are not independent objects, so you cannot query them directly. You must query for their parent; if you want just the aggregates, you can then collect them up.
You can also query the embeddables directly with JPQL using their parent context, i.e. "Select t.thePrimaryKey from TheTable t". In the EclipseLink DatabaseQuery API, this would be a ReportQuery, not a ReadAllQuery, that selects the embeddable.
See,
http://en.wikibooks.org/wiki/Java_Persistence/Embeddables#Querying
James : http://www.eclipselink.org
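James's advice (query the parent, then collect the embedded keys) can be sketched as follows. This is plain Python with made-up names standing in for the JPA classes, purely to illustrate collecting the embeddables from their parents; it is not EclipseLink API:

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the JPA entity and its embedded id.
@dataclass(frozen=True)
class TheTablePK:
    key_a: int
    key_b: str

@dataclass
class TheTable:
    pk: TheTablePK
    payload: str

def collect_pks(rows):
    """Query the parent objects, then 'collect up' their embedded keys,
    mirroring SELECT t.thePrimaryKey FROM TheTable t."""
    return [row.pk for row in rows]

rows = [TheTable(TheTablePK(1, "A"), "x"), TheTable(TheTablePK(2, "B"), "y")]
print(collect_pks(rows))
```

The JPQL form has the same effect server-side, with the collection done by the database instead of in memory.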
Similar Messages
-
How to calculate selected hierarchy only
Hello,
I have simple planning application (Hyperion 11.1.2) with all mandatory dimensions.
"Entity" dimension looks like this:
COMPANY_1
COSTCENTRE_1_1
PRODUCTIONLINE_1_1_1
PRODUCTIONLINE_1_1_2
PRODUCTIONLINE_1_1_3
COSTCENTRE_1_2
PRODUCTIONLINE_1_2_1
PRODUCTIONLINE_1_2_2
COSTCENTRE_1_3
PRODUCTIONLINE_1_3_1
PRODUCTIONLINE_1_3_2
COMPANY_2
COSTCENTRE_2_1
PRODUCTIONLINE_2_1_1
PRODUCTIONLINE_2_1_2
PRODUCTIONLINE_2_1_3
COSTCENTRE_2_2
PRODUCTIONLINE_2_2_1
PRODUCTIONLINE_2_2_2
COSTCENTRE_2_3
PRODUCTIONLINE_2_3_1
PRODUCTIONLINE_2_3_2
Users from COMPANY_1 and COMPANY_2 import data into Essbase (level-0 members) using EAS, separately for each company and at different times.
My problem is:
Now I am trying to prepare a calculation script which should calculate all "Measures" and aggregate Entity data for only one selected hierarchy (COMPANY_1 or COMPANY_2).
Something like this (but this is not supported in Essbase 11.1.2):
FIX ("January", "Local", "HSP_InputValue","FY11", "SAct", "VCurrentApproved", @IRDESCENDANTS("COMPANY_1"))
CALC DIM ("Measure", "Entity");
ENDFIX
Thank you for any inspiration.
Vladislav
Hi,
Modify your business rule as below and have your users run the calculation from Planning. Assuming they have access to their own entities only, they will only be able to select those entities, and the script will aggregate just the selected entity:
FIX ("January", "Local", "HSP_InputValue","FY11", "SAct", "VCurrentApproved", @RELATIVE({COMPANY},0))
CALC DIM ("Measure");
ENDFIX
FIX ("January", "Local", "HSP_InputValue","FY11", "SAct", "VCurrentApproved")
@IDESCENDANTS({COMPANY});
ENDFIX
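Roughly, the effect of such a FIX can be modelled as follows: the aggregation walks only the selected company's subtree and leaves the other hierarchy untouched. The member names follow the outline earlier in the thread; the data values are invented for illustration (a Python sketch of the idea, not Essbase code):

```python
# Parent -> children relationships from the Entity outline (trimmed).
children = {
    "COMPANY_1": ["COSTCENTRE_1_1", "COSTCENTRE_1_2"],
    "COSTCENTRE_1_1": ["PRODUCTIONLINE_1_1_1", "PRODUCTIONLINE_1_1_2"],
    "COSTCENTRE_1_2": ["PRODUCTIONLINE_1_2_1"],
    "COMPANY_2": ["COSTCENTRE_2_1"],
    "COSTCENTRE_2_1": ["PRODUCTIONLINE_2_1_1"],
}
# Level-0 input data (invented values).
level0 = {"PRODUCTIONLINE_1_1_1": 10, "PRODUCTIONLINE_1_1_2": 20,
          "PRODUCTIONLINE_1_2_1": 5, "PRODUCTIONLINE_2_1_1": 99}

def aggregate(member, data):
    """Roll level-0 values up to `member`, touching only its descendants."""
    if member not in children:          # level 0: input data
        return data.get(member, 0)
    total = sum(aggregate(c, data) for c in children[member])
    data[member] = total                # store the aggregated value
    return total

data = dict(level0)
aggregate("COMPANY_1", data)            # only COMPANY_1's hierarchy is touched
print(data["COMPANY_1"], "COMPANY_2" in data)
```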
Cheers,
Alp -
Currently I'm working on a provisioning system being developed in WLS 6.1, specifically
the entity data layer.
On a recent EJB training course I attended, the course tutor asserted that it was
possible to implement dependent objects on the said system. However, all the documentation
I have read since doesn't seem to suggest this at all.
As a particular example, I have an Order entity with a one-to-many relationship with
an OrderItem entity. There is also Party information that I thought could be a dependent
object of the Order entity. In other words, the Order entity maps multiple database
tables. It seems I'll have this problem later on when I begin to model Products
and Features of the OrderItem (i.e. will these have to be modelled as separate entities
as well, as opposed to mapping the tables within the OrderItem entity?).
Any clarification you could give me would be greatly appreciated. Perhaps some of
your own experience in this area, or pointers to useful URLs/books.
WLS 7.0 supports one EJB mapping to multiple tables.
Emmanuel Proulx wrote:
Dependent objects as they were introduced in draft EJB2.0 specifications do
not exist in the final version of the spec. In WLS 6.1 CMP, you have to have
1 entity object for 1 database table.
There is the concept of aggregate entity (one entity representing many
database tables) but WLS 6.1 CMP doesn't support that. You can do this with
BMP though.
Also, other products enable you to obtain aggregate entities, like WebGain's
TopLink, which plugs right into WLS.
Good luck,
Emmanuel
Rajesh Mirchandani
Developer Relations Engineer
BEA Support -
Essbase ASO - How to aggregate all Parents in multiple dimensions when using member formulas
We are trying to add a few MDX member formulae on some of our Accounts in the ASO cube. We recently learned that member formulas in ASO calculate both level-0 and parent members of the other dimensions, so we are trying to tell Essbase to calculate level 0 only and to aggregate all other levels in all dimensions. However, we are unable to get the syntax right. Below is what we have so far.
AccA has below formula:
CASE WHEN ISLEVEL([Period].Currentmember,0)
THEN
AccX+AccY
WHEN ISLEVEL([Period].Currentmember,1) OR ...etc
THEN SUM({LEAVES([Period].Currentmember)},[AccA])
END
This works fine and gives correct values for parent members in the Period dimension. But we also have 3 other dimensions: Product, Area and Entity. We tried the below but it throws an error during retrieval.
CASE WHEN ISLEVEL([Period].Currentmember,0)
THEN AccX+AccY
WHEN ISLEVEL([Period].Currentmember,1) OR ...etc
THEN SUM({LEAVES([Period].Currentmember),LEAVES([Product].Currentmember)},[AccA])
END
Should we add multiple SUM commands in there? All we want to do is tell Essbase to aggregate all other dimensions to parent levels. Please help.
Sorry to reiterate the post again.
DanPressman, I was wondering what other ways there would be to write calculations other than on Accounts.
I have a case where the user wants to calculate Ending Equity; this should be calculated at level 0 using the rates, and all parents of Entity have to aggregate.
1. Level-0 entities are calculated with the rates
2. Their children are aggregated up to the parent entities
I have used solve order to get it to work, but it is taking a lot of time.
Is there any other alternative way of doing it? Which other dimension can I choose to perform this calculation? This is exactly similar to the productsum calculation -
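For the multi-dimension case asked about above, the value of an upper-level cell has to be the sum of AccA over the cross product of the leaf members of every dimension being rolled up; in MDX this is typically expressed by crossjoining the LEAVES sets inside a single SUM rather than listing them separately. A rough Python sketch of the arithmetic, with invented members and values:

```python
from itertools import product

# Made-up leaf members and AccA data keyed by (period, product) pairs.
period_leaves = ["Jan", "Feb"]
product_leaves = ["P1", "P2"]
acc_a = {("Jan", "P1"): 1, ("Jan", "P2"): 2, ("Feb", "P1"): 3, ("Feb", "P2"): 4}

def parent_value(periods, products, data):
    """An upper-level cell must cover the cross product of the leaf
    members of every non-aggregated dimension, not each set separately."""
    return sum(data[cell] for cell in product(periods, products))

print(parent_value(period_leaves, product_leaves, acc_a))  # 10
```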
Searching by aggregate type field leads to error message in Workbench
I defined a persistent POJO entity that contains an attribute with an aggregate-type mapping.
Now I tried to query entities with an instance of that aggregate type as a parameter to the query (TopLink expression query in the Workbench).
The first argument (query key) is the attribute of the entity (aggregate type), the operator is EQUAL, and the second argument is the parameter, which is of the same aggregate type.
I thought this would be the natural way to query but the workbench shows an error message:
0251 - The expression (line 1.) on query readByOwner is invalid. When querying on a reference mapping, only unary operators (Is Null, Not Null) are supported.
Is this really an error? Where is the problem?
I found a similar post from 2003 on this forum saying that it is a bug, though I'm unsure if it is the same problem. I use the most recent versions of TopLink and the Workbench, so it's hard to believe that such a basic problem has not been fixed for 5 years... was it?
Regards,
Sebastian
I have been recording messages using the TCD command.
My script (very simple):
MESSAGE ( MSG_2 ).
TCD ( ME21 , ME21_1 , R3 ).
ENDMESSAGE ( E_MSG_2 ).
In the MESSAGE command interface, I defined rules to allow several kinds of messages.
Execution: 3 messages found:
* transform PR into PO
MESSAGE MSG_2 [1,009 sec]
RULES MSG_2 = XML-DATA-01
Message MODE EXIT TYPE ID NR
[1] 'A' 'I' 06 456
[2] 'A' 'W' 'ME' 080
[3] 'A' 'E' 'ZE' 029
TCD ME21 [0,545 sec N] Target sys R3 -> ZDA010A219
S06017 Standard PO created under the number 8201075606
Tgt System Z_A219->R3->ZDA010A219 (ZDA 010 ... HP-UX ORACLE)
CALL TRANSACTION ME21 ME21_1 XML-DATA-01
03 MESSAGES FROM ME21 ME21_1 XML-DATA-01
I 06 456 Release effected with release code 00001
W ME 080 Delivery date: next workday is 02.05.2007
S 06 017 Standard PO created under the number 8201075606
ENDMESSAGE E_MSG_2 (&TFILL = 0)
As you can see, 3 messages are found but the &TFILL variable is still 0.
I guess (but cannot test yet) that I would manage to record those messages using the SAPGUI command.
Is there anything wrong with my script?
My SAP_BASIS component is at version 620. I'm not using the latest version of eCATT (no WEBDYNPRO command, etc.). Could that be an explanation?
Thank you in advance,
Olivier -
Linq to oracle entities - convert decimal to string
Hi,
I'm using the Entity Framework beta for Oracle.
I need to compare a string to a decimal column in my LINQ to Entities query, so I need to convert the number column to varchar. But .ToString() is not supported, nor is Convert.ToString, nor SQLFunction.ToVarchar (or something like that). How can I achieve this? Should I import a DB function, or is there some other (easier) way?
Thx for answer
Hi,
I solved it by importing the to_char built-in Oracle function into my model.
In the edmx file, put this in the section where your other procedures/functions are (change the Schema attribute to yours):
<Function Name="to_char" ReturnType="varchar2" Aggregate="false" BuiltIn="true" NiladicFunction="false" IsComposable="true" ParameterTypeSemantics="AllowImplicitConversion" StoreFunctionName="to_char" Schema="TOPASFO_DEMO">
<Parameter Name="value" Type="number" Mode="In" />
</Function>
And make a partial class of your entities class and put this in it:
[EdmFunction("Store", "to_char")]
public string to_char(decimal value)
{
    // no need to implement the function; the call is translated to SQL
    throw new ApplicationException();
}
And you can call it just like your other imported functions/procedures. -
Indexes on cubes or aggregates on InfoObjects
Hello,
Please tell me if it is possible to put indexes on cubes; are they added automatically, or is this something I put on them?
I do not understand indexes; are they like aggregates?
I need to find info that explains this.
Thanks for the help.
Newbie
Indexes are quite different from aggregates.
An aggregate is a slice of a cube that speeds up data retrieval when a query is executed on the cube. Basically it is a kind of snapshot of the KPIs and business indicators (characteristics) that are displayed as the initial query result.
An index is a database structure that reduces query response time. When an object is activated, the system automatically creates primary indexes. Optionally, you can create additional indexes, called secondary indexes. Before loading data, it is advisable to delete the indexes and rebuild them after the load.
Indexes act like pointers for quickly getting at the data. "Delete" drops the indexes and "create" rebuilds them.
We delete them before loading because during the load the system would otherwise have to find the existing index entries and update them, which hurts data-load performance; dropping and recreating takes less time than updating the existing entries.
One more thing to take care of: if you have more than 50 million records this practice is not a good idea; instead, delete and recreate the indexes over the weekend when there are no users. -
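The drop-and-recreate rationale can be pictured with a toy model: keeping an "index" ordered on every insert pays the maintenance cost row by row, while building it once after the load does the same work in a single pass. A small Python sketch of the idea (not BW code; the data is invented):

```python
import bisect

rows = [5, 3, 8, 1, 9, 2]

# Incremental: keep the "index" ordered on every insert, which is what
# happens when indexes stay in place during a load.
index_incremental = []
for key in rows:
    bisect.insort(index_incremental, key)

# Drop-and-recreate: load first, build the index once at the end.
index_rebuilt = sorted(rows)

print(index_incremental == index_rebuilt)  # same index either way
```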
How to? Populate sequence in 2 Entities comprising 1-1 View Object
Hello all,
In my DB model, I have modelled subtypes of an entity by having a main table with the common fields and then separate tables (with a 1-1 relationship to the main table) with the fields that are specific to each type. Each of these tables has an ID column as the PK. I have created a trigger on the main table to auto-populate the ID from a sequence. If I have a VO based upon 2 of the entities (in a 1-1 relationship), how can I get the sequence # from the main table in order to populate the specialized table?
Regards,
john
The TreeViewAdaptor is responsible for mapping your custom data to the tree view itself. I almost always start by making it return some fixed number of objects with names "item 1" etc. That way you get the tree view working first.
Then, after you get it laid out and displaying properly, you can worry about using real data. At that point, you have your adaptor return the actual number of items in your list and each individual item. Then you can populate your list when you push your button and then invalidate the IControlView of the tree view widget to cause it to redraw. At that point your adaptor will get called and your data should appear.
Jon
"Expert for hire" -
How can I see the data in the aggregates
How can I see the data available in the aggregates?
Jay
Hi Jay,
It's quite simple:
Go to the Manage Aggregates screen and copy the technical name of the aggregate. The E fact table is /BIC/EXXX, where XXX is the aggregate's technical name, and for the F fact table use /BIC/FXXX. Go to SE16, enter the table name, and there is your data.
R -
Can not see any entities in The ER diagram
I am trying to create an ER diagram from an existing database. I am using Designer 10.1. I generated a server model, and the table-to-entity retrofit ran successfully. I can see all the entities and attributes in the RON. But when I try to create the ER diagram, I don't see any entities or attributes. Am I missing something?
Any help? Thanks in advance.
If version control is not turned on, then there is only one workarea, "GLOBAL SHARED WORKAREA", and EVERYTHING is in it. So that isn't the problem.
Let's step through this a little:
You start the E/R diagrammer and open a new diagram.
You are prompted to Select a default container for this diagram, and you probably selected the application (an application is one of two types of containers) where you created the entities.
So you have a blank diagram.
You select the Edit menu, and "Include...", "Entity".
You should get a window with a navigator in it, with the entities in the default container showing.
If not, you should be able to open the tree to view and select those entities.
Is this where you don't get the list of entities that you expected to see?
One possible problem is Access Rights - in RON, right click the application, and choose "View Access Rights". Make sure that your username has the right to SELECT that application. By the way, you ARE using a username other than the owner of the repository, right? The only time you should connect as the repository owner is in the RAU. Use the RAU to grant another user (have the DBA create another user, if you haven't already) access to the repository. Then, in RON, make that user the owner of your application. -
Aggregates, VLAN's, Jumbo-Frames and cluster interconnect opinions
Hi All,
I'm reviewing my options for a new cluster configuration and would like the opinions of people with more expertise than myself out there.
What I have in mind as follows:
2 x X4170 servers with 8 NICs in each.
On each X4170 I was going to configure 2 aggregates with 3 NICs in each aggregate, as follows:
igb0 device in aggr1
igb1 device in aggr1
igb2 device in aggr1
igb3 stand-alone device for iSCSI network
e1000g0 device in aggr2
e1000g1 device in aggr2
e1000g2 device in aggr2
e1000g3 stand-alone device for iSCSI network
Now, on top of these aggregates, I was planning on creating VLAN interfaces which will allow me to connect to our two "public" network segments and to the cluster heartbeat network.
I was then going to configure the VLANs in an IPMP group for failover. I know there are some questions around that configuration, in the sense that IPMP will not detect a failure if a NIC goes offline in the aggregate, but I could monitor that in a different manner.
At this point, my questions are:
[1] Are VLANs, on top of aggregates, supported within Solaris Cluster? I've not seen anything in the documentation to say that they are, or are not for that matter. I do see that VLANs are supported, including support for cluster interconnects over VLANs.
Now with the standalone interface I want to enable jumbo frames, but I've noticed that the igb.conf file has a global setting for all NIC ports, whereas I can enable it for a single NIC port in the e1000g.conf kernel driver. My questions are as follows:
[2] What is the general feeling about mixing MTU sizes on the same LAN/VLAN? I've seen some comments that this is not a good idea, and some say that it doesn't cause a problem.
[3] If the underlying NICs, igb0-2 (aggr1) for example, have a 9k MTU enabled, I can force the MTU size (1500) for "normal" networks on the VLAN interfaces pointing to my "public" network and the cluster interconnect VLAN. Does anyone have experience of this causing any issues?
Thanks in advance for all comments/suggestions.
For 1) the question is really "Do I need to enable jumbo frames if I don't want to use them (neither public nor private network)?" The answer is no.
For 2) each cluster needs to have its own separate set of VLANs.
Greets
Thorsten -
Aggregates on Non-cumulative InfoCubes (stock key figures)
Hi Gurus,
Please let me know if anybody has created aggregates on non-cumulative cubes or key figures (i.e. 0IC_C03, Inventory Management).
I am facing a performance problem at query execution time on 0IC_C03 (runtime dump).
I have tried a lot to create aggregates, using the proposal from the query and other options, but the queries are not using those aggregates.
Can somebody tell me about any sample aggregates they are using on 0IC_C03?
Or any tool to get better query execution performance on the said cube?
One more clarification request: what is "move the marker pointer" for stock calculation? I have compressed only the two initial data-loading requests. Should I compress all requests in the cube regularly?
If so, is there an option to compress requests automatically after a successful load into the data target?
We are using all three data sources 2lis_03_bx,bf & um for the same.
Regards,
Navin
Hi,
Compression definitely has more effect on query execution time for inventory cubes than for other (cumulative) cubes.
So do the compression regularly, once you feel that deleting a request is no longer needed.
And if a query does not have the calendar-day characteristic and needs only the month characteristic, use a snapshot InfoCube (the procedure is given in a How-To paper) and divert the month-wise queries (and those with higher granularity on the time characteristic, like quarter and year) to that cube.
And the percentage improvement in query execution time from aggregates is smaller for non-cumulative cubes than for normal (cumulative) cubes. But there is still an improvement in using aggregates.
With rgds,
Anil Kumar Sharma .P
Message was edited by: Anil Kumar Sharma -
Re: How to Improve the performance on Rollup of Aggregates for PCA Infocube
Hi BW Gurus,
I have an unresolved issue and our team is still working on it.
I have already posted several questions on this but am still not clear on how to reduce the time of the Rollup of Aggregates process.
I have requested an OSS note and am searching myself, but still could not find one.
Finally I executed one of the cubes in RSRV with the database check
"Database indexes of an InfoCube and its aggregates" and got warning messages. I tried to correct the error and executed it once again but still found warnings. The messages are as follows (this is only for one InfoCube; we have 6 InfoCubes and I am executing them one by one):
ORACLE: Index /BI0/IACCOUNT~0 has possibly degenerated
ORACLE: Index /BI0/IPROFIT_CTR~0 has possibly degenerated
ORACLE: Index /BI0/SREQUID~0 has possibly degenerated
ORACLE: Index /BIC/D1001072~010 has possibly degenerated
ORACLE: Index /BIC/D1001132~010 has possibly degenerated
ORACLE: Index /BIC/D1001212~010 has possibly degenerated
ORACLE: Index /BIC/DGPCOGC062~01 has possibly degenerated
ORACLE: Index /BIC/IGGRA_CODE~0 has possibly degenerated
ORACLE: Index /BIC/QGMAPGP1~0 has possibly degenerated
ORACLE: Index /BIC/QGMAPPC2~0 has possibly degenerated
ORACLE: Index /BIC/SGMAPGP1~0 has possibly degenerated
I don't know how to move further on this. Can anyone tell me how to tackle this problem to increase the performance of the Rollup of Aggregates (PCA InfoCubes)?
Every time, I create indexes and statistics regularly to improve performance; it works for a couple of days and then the performance of the rollup of aggregates gradually comes down again.
Thanks and Regards,
Venkat
Hi,
Check in a SQL client the SQL created by BI against the query that you use directly from your physical layer.
The difference between these two must be 2-3 seconds, otherwise you have problems (these seconds are for scripts needed by BI).
If you use "like" in your SQL then forget indexes...
For more information about indexes check Google or your DBA.
Lastly, I mentioned that a materialized view is not perfect, but it helps a lot... so why not try to split it into smaller ones:
ex...
logical dimensions
year-half-day
company-department
fact
quantity
instead of making one...make 3,
year - department - quantity
half - department - quantity
day - department - quantity
and add them as data sources and assign them the appropriate logical level at the business layer in the Administrator tool...
Do you use the partitioning functionality?
I hope I helped...
http://greekoraclebi.blogspot.com/
-
Questions regarding aggregates on cubes
Can someone please answer the following questions.
1. How do I check whether someone is rebuilding aggregates on a cube?
2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
3. What does it mean when someone switches off an aggregate? Basically, what is the difference (conceptually / time consumption) between:
A. activating an aggregate?
B. switching off/on an aggregate?
C. rebuilding an aggregate?
4. When a user complains that a query is running slow, do we build an aggregate based on the chars in rows & free chars in that query, OR is there anything else we need to include?
5. Does the database statistics option in the 'MANAGE' tab of a cube only show statistics, or does it do anything to improve the load/query performance on the cube?
Regards,
Srinivas
1. How do I check whether someone is rebuilding aggregates on a cube?
If your aggregate status is red and you are filling up the aggregate, it is an initial fill of the aggregate, and filling up means loading the data from the cube into the aggregate in full.
2. Does rebuilding an aggregate refer to the rollup process? Can it take a few hours?
Rebuilding of an aggregate is to reload the data into the aggregate from the cube once again.
3. What does it mean when someone switches off an aggregate, basically what is the difference (conceptually/time consumption)between:
A. activating an aggregate?
This means recreating the data structures for the aggregate, i.e. dropping the data and reloading it.
B. switching off/on an aggregate?
Switching off an aggregate means that it will not be used by the OLAP processor, but the aggregate still gets rolled up. Rollup refers to loading changed data from the cube into the aggregate; this is done based on the requests that have not yet been rolled up into the aggregate.
C. rebuilding an aggregate?
Reloading data into the aggregate
4. When a user complains that a query is running slow, do we build an aggregate based on the chars in rows & free chars in that query OR is there anything else we need to include?
Run the query in RSRT, do an SQL view of the query, check the characteristics that are used in the query, and then include the same in your aggregate.
5. Does database statistics in the 'MANAGE' tab of a cube only show statistics or does it do anything to improve the load/query performance on the cube?
Stats being updated will improve the execution plans on the database. Making sure that stats are up to date leads to better execution plans and hence possibly better performance, but it cannot be taken for granted that refreshing stats will improve query performance.
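The aggregate-building advice in the answers above boils down to a containment check: a query can be answered from an aggregate only if the aggregate contains every characteristic the query uses. A conceptual sketch (not SAP code; the characteristic names are hypothetical):

```python
def aggregate_usable(query_chars, aggregate_chars):
    """An aggregate can serve a query only if it contains every
    characteristic the query needs (rows, columns and free chars)."""
    return set(query_chars) <= set(aggregate_chars)

# Hypothetical characteristic names for illustration.
agg = {"0PROFIT_CTR", "0CALMONTH", "0ACCOUNT"}
print(aggregate_usable({"0PROFIT_CTR", "0CALMONTH"}, agg))  # True
print(aggregate_usable({"0PROFIT_CTR", "0CUSTOMER"}, agg))  # False
```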
Back end activities for Activation & Deactivation of Aggregates
Hi ,
Could anybody help me understand the back-end activities performed at the time of activation and deactivation of aggregates?
Is filling an aggregate the same as rollup?
What is the difference between deactivation and deletion of an aggregate?
Thanks.
Santanu
Hi Bose,
Activation:
In order to use an aggregate in the first place, it must be defined, activated and filled. When you activate it, the required tables are created in the database from the aggregate definition. Technically speaking, an aggregate is actually a separate BasicCube with its own fact table and dimension tables. Dimension tables that agree with those of the InfoCube are shared. Upon creation, every aggregate is given a six-digit number that starts with the figure 1. The table names that make up the logical object that is the aggregate are then derived in a similar manner as the table names of an InfoCube. For example, if the aggregate has the technical name 100001, the fact tables are called /BIC/E100001 and /BIC/F100001. Its dimensions, which are not the same as those in the InfoCube, have the table names /BIC/D100001P, /BIC/D100001T and so on.
Rollup:
New data packets / requests that are loaded into the InfoCube cannot be used for reporting at first if there are aggregates that are already filled. The new packets must first be written to the aggregates by a so-called "roll-up". In other words, data that has been recently loaded into an InfoCube is not visible for reporting, from the InfoCube or aggregates, until an aggregate roll-up takes place. During this process you can continue to report using the data that existed prior to the recent data load. The new data is only displayed by queries that are executed after a successful roll-up.
Go for the below link for more information.
http://sapbibw2010.blogspot.in/2010/10/aggregates.html
Naresh
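The table-naming convention described above can be captured in a tiny helper. This is illustrative only; the names follow the examples given in the text (technical name 100001 giving /BIC/E100001, /BIC/F100001, /BIC/D100001P, /BIC/D100001T):

```python
def aggregate_tables(tech_name):
    # Derive the aggregate's table names from its six-digit technical
    # name, following the convention described above.
    return {
        "E_fact": f"/BIC/E{tech_name}",
        "F_fact": f"/BIC/F{tech_name}",
        "P_dim": f"/BIC/D{tech_name}P",
        "T_dim": f"/BIC/D{tech_name}T",
    }

print(aggregate_tables("100001")["E_fact"])  # /BIC/E100001
```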