Data flow of logical partitions via MultiProvider
Hi experts,
I need to report on an InfoCube that will hold 2 years of data, totalling about 120 million records. I would therefore like to use logical partitioning to distribute the data across other InfoCubes based on a time characteristic, e.g. month. But I am not sure how to achieve this. What I would like to know is the procedure: how to set up the DTPs, how to use a process chain to automate the data load, the recommended strategy/approach, etc.
Your advice is highly appreciated!
Thanks in advance.
Regards,
Meng
Hi Joon,
In the case of logical partitioning, the first important thing is that reporting is always done on a MultiProvider, so that reporting remains unchanged when the underlying InfoProviders change.
You can take different approaches. In your example you would create two different InfoCubes: one for, say, the current year's data (I assume this already exists) and one for the previous year's data.
Create a transformation and DTP between the current cube and the previous-year cube, and move all previous-year data from the current cube to the newly created cube using the delta method, with a month or year restriction in the DTP selections. Once you have moved the data, validate it in the target cube and run a selective deletion on the current cube to avoid data duplication. Compressing the current cube is advisable, since after the selective deletion most requests will be empty and will only hold request IDs.
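As an illustration only (plain C#, not SAP code), the net effect of the year-restricted delta DTP plus the selective deletion can be sketched over an in-memory record list; the class and method names here are hypothetical:

```csharp
// Illustrative sketch: a "cube" is just a list of (year, amount) rows.
using System.Collections.Generic;
using System.Linq;

static class LogicalPartitionMove
{
    // Moves all rows of `year` from the current cube to the previous-year
    // cube (the DTP selection), then deletes them from the current cube
    // (the selective deletion), avoiding duplication across the two cubes.
    public static void MoveYear(List<(int Year, decimal Amount)> currentCube,
                                List<(int Year, decimal Amount)> previousCube,
                                int year)
    {
        previousCube.AddRange(currentCube.Where(r => r.Year == year));
        currentCube.RemoveAll(r => r.Year == year);
    }
}
```

The point of the sketch is that the move and the deletion use the same selection, which is what guarantees no duplication across the two cubes.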
By now you have separated the current-year and previous-year data into two separate cubes. Create a MultiProvider on top of these cubes, and you are free to create reports on top of the MultiProvider.
When you load data from the source system, you have two options:
1) Keep the current data flow as it is: load all data, including history, into the current cube and leave it there. Since both cubes are in the same MultiProvider, your overall reporting is not affected. If you want, you can periodically move this data from the current cube to the previous cube using the delta DTP created earlier, followed by another selective deletion. You can even automate this process.
2) Alternatively, create individual flows to the current and history cubes and load selectively from the source system. The data is loaded directly into both cubes, but the same source is then extracted twice.
Regards,
Durgesh.
Similar Messages
-
Logical partitioning - how much % improvement?
Hello All,
I want to know: if we use logical partitioning for a cube, how much of a percentage performance improvement will there be, and how does it depend on the amount of data loaded into the cube?
Q1: Logical partitioning of cubes - how much % of performance improvement can be expected?
Q2: How much % of performance improvement can be expected if, say, the number of records in the cube were half the current number?
Thanks in advance,
Bandana.
Bandana,
I would say that the question is incorrect ..( Carl Sagan..? )
First - the percentage improvement depends on the amount of data. If my cube has 100 records and I do a logical partition, I will not see any improvement - but ramp the numbers up to 1 billion records and I see very significant improvements.
Let's say your cube has data for 2010 and 2011 and you want to logically separate out the data. Typically, logical partitioning means separating the data into separate cubes by year / country etc. - any partitioning that is not using the default partitioning on 0FISCPER / 0CALMONTH etc...
If this is not what you are looking at or if I am wrong - please correct me here...
Now if you split your cube into two parts - one cube for 2010 and one for 2011 - the time required to access the data in the 2010 cube decreases, because it has a cube of its own. This is the benefit you get in terms of data access times, but it is not going to be a 100% improvement just because you have a cube for each year; it depends to a large extent on the query as well. In almost all cases, though, you will see a performance benefit.
I am not able to understand your second question. -
SSAS 2008R2: Dynamic partitioning via SSIS - is this logic correct?
Hi all,
I'm setting up dynamic partitioning on a large financial cube and implementing a 36-month sliding window for the data. I just want to make sure that my logic is correct.
Basically: is doing a Process Update of all the dims and then a Process Default of my facts (after I've run the XMLA to add/remove partitions) enough to get a fully processed (and performant/aggregated) and accurate cube?
Assume I have a fact that has a 'reporting month', 'location key' and then numerous measures and dim keys. It holds the revenue for that location for the reporting month.
The reporting month can never be backdated; subsequent runs can only overwrite the current reporting month or add the next month.
Assume the data warehouse has been loaded successfully. The warehouse holds a 72-month rolling history.
Now, to the dynamic partitioning. The fact is partitioned by reporting month and has aggregation designs.
My SSIS package initially does a process update on all the dimensions. My understanding is that this 'flags' which existing measure partitions need to be reindexed.
Then in my data flow:
I run a simple query over my fact (select 'my partition ' + str(billmonth,6) AS PartitionName, count(*) as EstCount from myFact where billmonth > 36months ago group by billmonth order by PartitionName) to get a list of all the partitions that exist in the data warehouse and that should therefore be in the cube.
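The sliding window above can be sketched as a pure function. The "my partition YYYYMM" naming follows the query above; the helper itself is hypothetical and not part of the package:

```csharp
// Sketch: compute the partition names that *should* exist for an
// n-month sliding window ending at a given reporting month.
using System;
using System.Collections.Generic;

static class PartitionWindow
{
    // Returns names "my partition YYYYMM" for the last `windowMonths`
    // months up to and including `latest`, in ascending order.
    public static List<string> ExpectedPartitions(DateTime latest, int windowMonths)
    {
        var names = new List<string>();
        for (int i = windowMonths - 1; i >= 0; i--)
        {
            DateTime m = latest.AddMonths(-i);
            names.Add($"my partition {m:yyyyMM}");
        }
        return names;
    }
}
```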
I do a full outer merge on the partition name against the equivalent list from the cube. I use a script component as a source, with the following code:
// Assumes "using AMO = Microsoft.AnalysisServices;" in the script component.
AMO.Server amoServer;
AMO.MeasureGroup amoMeasureGroup;

public override void PreExecute()
{
    base.PreExecute();
    amoServer = new AMO.Server();
    amoServer.Connect(Connections.Cube.ConnectionString);
    amoMeasureGroup = amoServer.Databases
        .FindByName(amoServer.ConnectionInfo.Catalog.ToString())
        .Cubes.FindByName(Variables.CubeName.ToString())
        .MeasureGroups.FindByName(Variables.MeasureGroupName.ToString());
    amoServer.CaptureXml = true;
}

public override void PostExecute()
{
    base.PostExecute();
    amoServer.Dispose();
}

public override void CreateNewOutputRows()
{
    try
    {
        foreach (AMO.Partition OLAPPartition in amoMeasureGroup.Partitions)
        {
            Output0Buffer.AddRow();
            Output0Buffer.PartitionName = OLAPPartition.Name;
        }
    }
    catch (Exception e)
    {
        bool Error = true;
        this.ComponentMetaData.FireError(-1, this.ComponentMetaData.Name,
            String.Format("The measure group {0} could not be found. " + e.ToString(),
                Variables.MeasureGroupName.ToString()),
            "", 0, out Error);
        throw;
    }
}
(I'm not a C# coder - the above was borrowed and adapted from elsewhere, but it seems to work.)
I use a conditional split to separate the rows where datawarehouse.PartitionName is null (generate XMLA to delete the partition from the cube) from the rows where cube.PartitionName is null (generate XMLA to add it to the cube). I don't do anything with partitions that exist in both the cube and the data warehouse.
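The conditional split boils down to a set difference on partition names. A minimal sketch of that logic outside SSIS (the helper and its names are assumptions, not actual package components):

```csharp
// Sketch of the conditional-split logic: a full outer comparison of
// partition names from the warehouse vs. the cube.
using System.Collections.Generic;
using System.Linq;

static class PartitionDiff
{
    // Partitions present only in the warehouse must be added to the cube;
    // partitions present only in the cube must be dropped; the intersection
    // is left alone (matching the package's conditional split).
    public static (List<string> ToAdd, List<string> ToDrop) Diff(
        IEnumerable<string> warehouse, IEnumerable<string> cube)
    {
        var w = new HashSet<string>(warehouse);
        var c = new HashSet<string>(cube);
        return (w.Except(c).OrderBy(n => n).ToList(),
                c.Except(w).OrderBy(n => n).ToList());
    }
}
```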
I then perform a process default of the measure group.
I'm assuming this will do a Process Full of the new, unprocessed partitions, and a Process Data + Process Index of any partitions that were modified by the dimensions' Process Update. Is this correct? Or do I need to do any other explicit processing of my measure groups to make sure my facts are 100% accurate?
Thanks.
Jakub @ Adelaide, Australia
Cheers, I'll switch it to use GetByName instead.
The reprocessing of the current month includes steps in the SSIS package flow that explicitly remove the data from the relational data warehouse (delete from) and from the cube (an XMLA delete statement against the partitions).
Yes, I do have other measure groups in the cube.
I have five measure groups in total. Three are dynamically partitioned while the other two have a single partition.
What will happen to new data in the two single-partition measure groups? I did some further reading, and my understanding now is that a Process Default might not process the aggregations if no dimensions have changed but new fact data has arrived.
I'm now thinking of making my data flow:
1. Execute the dynamic partitioning XMLA
2. Process Update the dimensions with affected objects included (this reprocesses the existing dynamic partitions that are modified by any dim changes)
3. Process Default the 3 dynamically partitioned measure groups (this processes any newly added dynamic partitions)
4. Process Full the 2 single-partition measure groups - this step might redo some of the work done in step 2, but these measure groups are only a few million rows in one case and a few hundred in the other, with minimal growth expected. And what you just said about changing data made me realise that the sliding window in these last two is implemented via the source script, so I need a Process Full here anyway.
Jakub @ Adelaide, Australia -
How can I include a logical data model in a data flow diagram?
Hi,
I have created a logical data model and now I want to include it in a data flow diagram. I do not know which element I should use to create this relation.
Thanks,
Hi,
you need to create an "Information Structure" in the flow properties dialog; you can then assign attributes to that information structure in its properties dialog.
Philip -
Automatic creation of BW data flow documentation
Dear Gurus,
I need to write documentation for the data flow of a huge project which I haven't implemented myself.
The documentation should contain a mapping of the objects in the data providers to the objects in the source system(s).
Possibly also the information about which data providers the objects are included in, e.g. between the MultiProvider and the source system.
Details of transformations can be ignored; at most a mention that a routine is involved.
With the Metadata Repository, I can get the content of cubes in a graphical overview, but it doesn't really provide me with useful information.
You can imagine I prefer an automatic way to create this documentation.
Does anybody know a solution, even one that covers only part of the purpose?
Any solution via query, standard SAP or customized program, ...
Recommendations would be very highly appreciated!
Thx & Rgds, sam
Documentation is written on SAP BW projects worldwide, but no reply on automatic documentation.
A lot of time must be lost manually creating documentation that maps objects to source system fields.
==> SAP, please, work out a solution.
I didn't find a satisfying solution, but I've done it the following way:
List all objects of a MultiProvider via the Metadata Repository, and paste them into an Excel document.
Then list all objects of the underlying data providers, and paste them into separate sheets of the same Excel file.
Compare the objects of the MP with the objects on the other sheets using Excel functions, and mark when a data provider contains a certain object.
For the DataSources, I checked whether an object is present and, if so, recorded the original source field.
In summary, not an optimal or complete solution, but it prevents mistakes.
Rgds. sam -
Logical partitioning, pass-through layer, query pruning
Hi,
I am working through performance guidelines for BW and have encountered a few interesting topics which I do not fully understand.
1. Maintenance of logical partitioning.
Let's assume logical partitioning is performed by year. Does that mean that every year or so it is necessary to create an additional cube/transformation and modify the MultiProvider? Is there any automatic procedure from SAP that supports the creation of the new objects, or is it fully manual?
2. Pass-through layer.
There is very little information about this basic concept. Anyway:
- is the pass-through DSO a write-optimized one? Does it store only one load - the last one? Is it deleted after the load has successfully finished (or before a new load starts)? And does this deletion not destroy the delta mechanism? Does the DSO replace the PSA functionally (i.e. can the PSA be deleted after every load as well)?
3. Query pruning
Does this happen automatically on DB level, or additional developments with exits variables, steering tables and FMs is required?
4. DSOs for master data loads
What is the benefit of using full MD extraction and DSO delta instead of MD delta extraction?
Thanks,
Marcin
1. Maintenance of logical partitioning.
Let's assume logical partitioning is performed by year. Does that mean that every year or so it is necessary to create an additional cube/transformation and modify the MultiProvider? Is there any automatic procedure from SAP that supports the creation of the new objects, or is it fully manual?
Logical partitioning is when you have separate ODSs / cubes for separate years etc.
There is no automated way - however, if you want to, you can physically partition the cubes by time periods and extend them regularly using the repartitioning options provided.
2. Pass-through layer.
There is very little information about this basic concept. Anyway:
- is the pass-through DSO a write-optimized one? Does it store only one load - the last one? Is it deleted after the load has successfully finished (or before a new load starts)? And does this deletion not destroy the delta mechanism? Does the DSO replace the PSA functionally (i.e. can the PSA be deleted after every load as well)?
Usually a pass through layer is used to
1. Ensure data consistency
2. Possibly use Deltas
3. Additional transformations
In a write-optimized DSO, the request ID is part of the key, and hence delta is based on the request ID. If you do not have any additional transformations, then a write-optimized DSO is essentially like your PSA.
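As an illustration of that request-ID-based delta (a hypothetical sketch in plain C#, not SAP code), delta extraction from a write-optimized DSO simply means reading every request newer than the last one extracted:

```csharp
// Illustrative sketch: records in a write-optimized DSO carry their
// load's request ID; delta = all requests after the last extracted one.
using System.Collections.Generic;
using System.Linq;

static class WoDsoDelta
{
    // records: (requestId, payload). Returns the payloads of all requests
    // newer than lastExtractedRequestId, in request order.
    public static List<string> Delta(
        IEnumerable<(int RequestId, string Payload)> records,
        int lastExtractedRequestId)
    {
        return records.Where(r => r.RequestId > lastExtractedRequestId)
                      .OrderBy(r => r.RequestId)
                      .Select(r => r.Payload)
                      .ToList();
    }
}
```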
3. Query pruning
Does this happen automatically on DB level, or additional developments with exits variables, steering tables and FMs is required?
Query pruning depends on the rule-based and cost-based optimizers within the DB; you have little control over how the query executes, other than having up-to-date statistics, building aggregates, etc.
4. DSOs for master data loads
What is the benefit of using full MD extraction and DSO delta instead of MD delta extraction?
It depends more on the data volumes and also on the number of transformations required...
If you have multiple levels of transformations, use a DSO; likewise, if you have very high data volumes and want to identify changed records, use a DSO. -
Physical Vs Logical Partitioning
We have 2 million records in the sales infocube for 3 years. We are currently discussing the pros and cons of using Logical partitioning Vs Physical Partitioning. Please give your inputs.
hi
there are two types of partitioning generally talked about with SAP BW: logical and physical partitioning.
Logical partitioning - instead of having all your data in a single cube, you might break it into separate cubes, with each cube holding a specific year's data, e.g. you could have 5 sales cubes, one for each year 2001 thru 2005.
You would then create a Multi-Provider that allowed you to query all of them together.
A query that needs data from all 5 years would then automatically (you can control this) be split into 5 separate queries, one against each cube, running at the same time. The system automatically merges the results from the 5 queries into a single result set.
So it's easy to see when this could be a benefit. If your queries, however, are primarily run for just a single year, then you don't receive the benefit of the parallel processing. In non-Oracle DBs, splitting the data like this may still be a benefit by reducing the number of rows in the fact table that must be read, but it does not provide as much value in an Oracle DB, since InfoCube queries use a Star_Transformation.
Physical Partitioning - I believe only Oracle and Informix currently support Range partitioning. This is a separately licensed option in Oracle.
Physical partitioning allows you to split an InfoCube into smaller pieces. The pieces, or partitions, can only be created on 0FISCPER or 0CALMONTH for an InfoCube (ODSs can be partitioned, but require a DBA's involvement). The DB can then take advantage of this partitioning by "pruning" partitions during a query: e.g. if a query only needs data from June 2005, the DB is smart enough to restrict the indices and data it will read to the June 2005 partition. This assumes your query restricts/filters on the partitioning characteristic. It can apply this pruning to a range of partitions as well, e.g. 0FISCPER 001/2005 thru 003/2005 would only look at the 3 partitions.
It is NOT smart enough, however, to figure out that if you restrict to 0FISCYEAR = 2005, it should only read 000/2005 thru 016/2005, since 0FISCYEAR is NOT the partitioning characteristic.
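A rough sketch of the pruning rule described above (illustrative only; the helper is hypothetical, and the real work happens inside the database): partitions are pruned only when the filter is a range on the partitioning characteristic itself.

```csharp
// Illustrative sketch: range-partition pruning over 0FISCPER keys.
using System;
using System.Collections.Generic;
using System.Linq;

static class PartitionPruning
{
    // Partitions are keyed by fiscal period "PPP/YYYY"; a filter range on
    // the same characteristic prunes everything outside [from, to].
    public static List<string> PartitionsToRead(
        IEnumerable<string> partitionKeys, string from, string to)
    {
        // Build sort key "YYYYPPP" so that 001/2005 < 003/2005 < 001/2006.
        static string SortKey(string fiscper) =>
            fiscper.Substring(4, 4) + fiscper.Substring(0, 3);
        string lo = SortKey(from), hi = SortKey(to);
        return partitionKeys
            .Where(p => string.Compare(SortKey(p), lo, StringComparison.Ordinal) >= 0
                     && string.Compare(SortKey(p), hi, StringComparison.Ordinal) <= 0)
            .ToList();
    }
}
```

A filter on 0FISCYEAR alone gives the pruner no such range over the partition keys, which is why the year-only restriction described above cannot be pruned.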
An InfoCube MUST be empty in order to physically partition it. At this time, there is no way to add additional partitions thru the AWB, so you want to make sure that you create partitions out into the future for at least a couple of years.
If the base cube is partitioned, any aggregates that contain the partitioning characteristic (0CALMONTH or 0FISCPER) will automatically be partitioned.
In summary, you need to figure out if you want to use physical or logical partitioning on the cube(s), or both, as they are not mutually exclusive.
So you would need to know how the data will be queried, and the volume of data. It would make little sense to partition cubes that will not be very large.
Physical partitioning is done at the database level, and logical partitioning is done at the data-target level.
Cube partitioning with the time characteristics 0CALMONTH or 0FISCPER is physical partitioning.
Logical partitioning means you partition your cube by year or month, i.e. you divide the cube into different cubes and create a MultiProvider on top of them.
logical Vs physical partitions ? -
Cannot create another 2 logical partitions on another physical server
When I installed BI 7.0 on the AIX/DB2 9 platform, I could create 2 logical partitions on the main server, yet I couldn't create another 2 logical partitions on the second server. The following is the error message:
INFO 2008-02-21 03:49:03.490
TRACE 2008-02-21 03:51:28.513 [iaxxejsexp.cpp:199]
EJS_Installer::writeTraceToLogBook()
Found Error, error_codes[1] = <db2start dbpartitionnum 5 add dbpartitionnum hostname sapaix08 port 3 without tablespaces
SQL6073N Add Node operation failed. SQLCODE = "-1051".>
TRACE 2008-02-21 03:51:28.513 [iaxxejsexp.cpp:199]
EJS_Installer::writeTraceToLogBook()
During execution of <AddPart.sql>, <2> errors occured.
ERROR 2008-02-21 03:51:28.513 [iaxxinscbk.cpp:282]
abortInstallation
MDB-01999 Error occured, first error is: <SQL6073N Add Node operation failed. SQLCODE = "-1051".>
TRACE 2008-02-21 03:51:28.514 [iaxxejsbas.hpp:388]
handleException<ESAPinstException>()
Converting exception into JS Exception Exception.
ERROR 2008-02-21 03:51:28.515
CJSlibModule::writeError_impl()
MUT-03025 Caught ESAPinstException in Modulecall: ESAPinstException: error text undefined.
TRACE 2008-02-21 03:51:28.515 [iaxxejsbas.hpp:460]
EJS_Base::dispatchFunctionCall()
JS Callback has thrown unknown exception. Rethrowing.
ERROR 2008-02-21 03:51:28.516 [iaxxgenimp.cpp:731]
showDialog()
FCO-00011 The step AddDB6Partitions with step key |NW_DB6_DB_ADDPART|ind|ind|ind|ind|0|0|NW_DB6_AddPartitions|ind|ind|ind|ind|12|0|
AddDB6Partitions was executed with status ERROR .
TRACE 2008-02-21 03:51:28.539 [iaxxgenimp.cpp:719]
showDialog()
The following are my prerequisites for the installation:
1. the user and group IDs and properties are the same as on the primary (server1)
2. the ssh trust relationship has been built; I can ssh to server1 from server2 or to server2 from server1 with the db2sid and sidadm users
3. I mounted /db2/db2sid, /db2/SID/db2dumps and /sapmnt/SID/exe on server2 via NFS
4. the db2 software is installed at /opt/IBM/db2/V9.1 (the same location as on the primary)
Hi, DB2 experts, could you give me some suggestions? Thanks!
Hi Thomas,
Thanks for your help. The db2 database doesn't use the automatic storage method, and the relevant permissions are the same as on server1. I checked the db2diag.log; the detailed information follows.
"Storage path does not exist or is inaccessible" is the error message. I was wondering which storage path does not exist or is inaccessible.
At the same time, I logged in to all the /db2 paths with db2sid and ran touch to test the permissions; that looks fine. I don't know what is happening. Could you give me some suggestions? Thanks!
2008-02-21-08.10.56.442000-300 I14165596A287 LEVEL: Event
PID : 843832 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:240
DATA #1 : String, 26 bytes
Stop phase is in progress.
2008-02-21-08.10.56.444783-300 I14165884A302 LEVEL: Event
PID : 843832 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:250
DATA #1 : String, 41 bytes
Requesting system controller termination.
2008-02-21-08.10.56.450366-300 I14166187A403 LEVEL: Warning
PID : 712906 TID : 1 PROC : db2sysc 5
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, routine_infrastructure, sqlerKillAllFmps, probe:5
MESSAGE : Bringing down all db2fmp processes as part of db2stop
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFE400 : 0000 0000 ....
2008-02-21-08.10.56.456345-300 I14166591A304 LEVEL: Event
PID : 843832 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:260
DATA #1 : String, 43 bytes
System controller termination is completed.
2008-02-21-08.10.56.461462-300 I14166896A381 LEVEL: Event
PID : 843832 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:280
DATA #1 : String, 24 bytes
There is no active EDUs.
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFFCEE0 : 0000 0000 ....
2008-02-21-08.10.56.504322-300 I14167278A342 LEVEL: Severe
PID : 823374 TID : 1 PROC : db2acd 5
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, routine_infrastructure, sqlerFmpOneTimeInit, probe:100
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFF5A4 : FFFF FBEE ....
2008-02-21-08.10.56.654959-300 E14167621A301 LEVEL: Event
PID : 843832 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:911
MESSAGE : ADM7514W Database manager has stopped.
STOP : DB2 DBM
2008-02-21-08.11.09.664000-300 I14167923A417 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 53 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile FORCE1 0 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0022 ..."
2008-02-21-08.11.10.176098-300 I14168341A417 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 53 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile FORCE1 1 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0022 ..."
2008-02-21-08.11.10.595702-300 I14168759A417 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 53 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile FORCE1 2 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0022 ..."
2008-02-21-08.11.11.124888-300 I14169177A417 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 53 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile FORCE1 3 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0022 ..."
2008-02-21-08.11.12.070605-300 I14169595A410 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 46 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile 0 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0020 ...
2008-02-21-08.11.12.694723-300 I14170006A410 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 46 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile 1 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0020 ...
2008-02-21-08.11.13.115940-300 I14170417A410 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 46 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile 2 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0020 ...
2008-02-21-08.11.13.632046-300 I14170828A410 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 46 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile 3 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0020 ...
2008-02-21-08.11.14.577056-300 I14171239A418 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 54 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 0 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0024 ...$
2008-02-21-08.11.15.004794-300 I14171658A418 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 54 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 1 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0024 ...$
2008-02-21-08.11.15.425920-300 I14172077A418 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 54 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 2 0
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0024 ...$
2008-02-21-08.11.15.941622-300 I14172496A418 LEVEL: Event
PID : 639410 TID : 1 PROC : db2stop
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 54 bytes
/db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 3 1
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9CBC : 0000 0024 ...$
2008-02-21-08.11.17.002107-300 I14172915A422 LEVEL: Event
PID : 639412 TID : 1 PROC : db2start
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 57 bytes
/db2/db2ab7/sqllib/adm/db2rstar db2profile SN ADDNODE 4 2
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9C2C : 0000 0011 ....
2008-02-21-08.11.18.055723-300 E14173338A856 LEVEL: Warning
PID : 806940 TID : 1 PROC : db2star2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, license manager, sqllcRequestAccess, probe:1
MESSAGE : ADM12007E There are "80" day(s) left in the evaluation period for
the product "DB2 Enterprise Server Edition". For evaluation license
terms and conditions, refer to the IBM License Acceptance and License
Information document located in the license directory in the
installation path of this product. If you have licensed this product,
ensure the license key is properly registered. You can register the
license via the License Center or db2licm command line utility. The
license file can be obtained from your licensed product CD.
2008-02-21-08.11.18.296453-300 E14174195A1040 LEVEL: Event
PID : 806940 TID : 1 PROC : db2star2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StartMain, probe:911
MESSAGE : ADM7513W Database manager has started.
START : DB2 DBM
DATA #1 : Build Level, 152 bytes
Instance "db2ab7" uses "64" bits and DB2 code release "SQL09012"
with level identifier "01030107".
Informational tokens are "DB2 v9.1.0.2", "special_17253", "U810940_17253", Fix Pack "2".
DATA #2 : System Info, 224 bytes
System: AIX sapaix08 3 5 00CCD7FE4C00
CPU: total:8 online:8 Threading degree per core:2
Physical Memory(MB): total:7744 free:5866
Virtual Memory(MB): total:32832 free:30943
Swap Memory(MB): total:25088 free:25077
Kernel Params: msgMaxMessageSize:4194304 msgMaxQueueSize:4194304
shmMax:68719476736 shmMin:1 shmIDs:131072
shmSegments:68719476736 semIDs:131072 semNumPerID:65535
semOps:1024 semMaxVal:32767 semAdjustOnExit:16384
2008-02-21-08.11.19.312894-300 I14175236A428 LEVEL: Error
PID : 835728 TID : 1 PROC : db2agent (instance) 4
INSTANCE: db2ab7 NODE : 004
APPHDL : 4-7 APPID: *LOCAL.db2ab7.080221131118
FUNCTION: DB2 UDB, base sys utilities, sqleGetAutomaticStorageDetails, probe:111111
DATA #1 : <preformatted>
dataSize 752 pMemAlloc 1110cdac0 sizeof(struct sqleAutoStorageCfg) 16
2008-02-21-08.11.19.346560-300 I14175665A497 LEVEL: Error
PID : 835728 TID : 1 PROC : db2agent (instance) 4
INSTANCE: db2ab7 NODE : 004
APPHDL : 4-7 APPID: *LOCAL.db2ab7.080221131118
FUNCTION: DB2 UDB, buffer pool services, sqlbInitStorageGroupFiles, probe:50
MESSAGE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
DATA #1 : String, 17 bytes
/db2/AB7/sapdata1
2008-02-21-08.11.19.349637-300 I14176163A619 LEVEL: Severe
PID : 835728 TID : 1 PROC : db2agent (instance) 4
INSTANCE: db2ab7 NODE : 004
APPHDL : 4-7 APPID: *LOCAL.db2ab7.080221131118
FUNCTION: DB2 UDB, buffer pool services, sqlbInitStorageGroupFiles, probe:50
MESSAGE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
DATA #1 : String, 46 bytes
Error during storage group file initialization
DATA #2 : Pointer, 8 bytes
0x0ffffffffffed006
DATA #3 : Pointer, 8 bytes
0x00000001110b3080
2008-02-21-08.11.19.355029-300 I14176783A435 LEVEL: Error
PID : 835728 TID : 1 PROC : db2agent (instance) 4
INSTANCE: db2ab7 NODE : 004
APPHDL : 4-7 APPID: *LOCAL.db2ab7.080221131118
FUNCTION: DB2 UDB, base sys utilities, sqleStartDb, probe:5
RETCODE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
2008-02-21-08.11.19.357831-300 I14177219A370 LEVEL: Warning
PID : 835728 TID : 1 PROC : db2agent (instance) 4
INSTANCE: db2ab7 NODE : 004
APPHDL : 4-7 APPID: *LOCAL.db2ab7.080221131118
FUNCTION: DB2 UDB, base sys utilities, sqle_remap_errors, probe:100
MESSAGE : ZRC 0x800201a5 remapped to SQLCODE -1051
2008-02-21-08.11.19.374857-300 I14177590A336 LEVEL: Severe
PID : 803022 TID : 1 PROC : db2sysc 4
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, sqleSysCtrlAddNode, probe:6
MESSAGE : ADD NODE failed with SQLCODE -1051 MESSAGE TOKEN /db2/AB7/sapdata1 in module SQLECRED
2008-02-21-08.11.19.381604-300 I14177927A440 LEVEL: Event
PID : 639412 TID : 1 PROC : db2start
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 75 bytes
DB2NODE=4 DB2LPORT=2 /db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 4 2
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9C2C : 0000 0024 ...$
2008-02-21-08.11.20.255191-300 I14178368A287 LEVEL: Event
PID : 700804 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:240
DATA #1 : String, 26 bytes
Stop phase is in progress.
2008-02-21-08.11.20.258575-300 I14178656A302 LEVEL: Event
PID : 700804 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:250
DATA #1 : String, 41 bytes
Requesting system controller termination.
2008-02-21-08.11.20.265164-300 I14178959A403 LEVEL: Warning
PID : 803022 TID : 1 PROC : db2sysc 4
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, routine_infrastructure, sqlerKillAllFmps, probe:5
MESSAGE : Bringing down all db2fmp processes as part of db2stop
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFE400 : 0000 0000 ....
2008-02-21-08.11.20.271570-300 I14179363A304 LEVEL: Event
PID : 700804 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:260
DATA #1 : String, 43 bytes
System controller termination is completed.
2008-02-21-08.11.20.276550-300 I14179668A381 LEVEL: Event
PID : 700804 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:280
DATA #1 : String, 24 bytes
There is no active EDUs.
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFFCEE0 : 0000 0000 ....
2008-02-21-08.11.20.312260-300 I14180050A342 LEVEL: Severe
PID : 774176 TID : 1 PROC : db2acd 4
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, routine_infrastructure, sqlerFmpOneTimeInit, probe:100
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFF5A4 : FFFF FBEE ....
2008-02-21-08.11.20.474332-300 E14180393A301 LEVEL: Event
PID : 700804 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 004
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:911
MESSAGE : ADM7514W Database manager has stopped.
STOP : DB2 DBM
2008-02-21-08.11.20.600512-300 I14180695A422 LEVEL: Event
PID : 671870 TID : 1 PROC : db2start
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 57 bytes
/db2/db2ab7/sqllib/adm/db2rstar db2profile SN ADDNODE 5 3
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9C2C : 0000 0011 ....
2008-02-21-08.11.21.620771-300 E14181118A856 LEVEL: Warning
PID : 819454 TID : 1 PROC : db2star2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, license manager, sqllcRequestAccess, probe:1
MESSAGE : ADM12007E There are "80" day(s) left in the evaluation period for
the product "DB2 Enterprise Server Edition". For evaluation license
terms and conditions, refer to the IBM License Acceptance and License
Information document located in the license directory in the
installation path of this product. If you have licensed this product,
ensure the license key is properly registered. You can register the
license via the License Center or db2licm command line utility. The
license file can be obtained from your licensed product CD.
2008-02-21-08.11.21.839933-300 E14181975A1040 LEVEL: Event
PID : 819454 TID : 1 PROC : db2star2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StartMain, probe:911
MESSAGE : ADM7513W Database manager has started.
START : DB2 DBM
DATA #1 : Build Level, 152 bytes
Instance "db2ab7" uses "64" bits and DB2 code release "SQL09012"
with level identifier "01030107".
Informational tokens are "DB2 v9.1.0.2", "special_17253", "U810940_17253", Fix Pack "2".
DATA #2 : System Info, 224 bytes
System: AIX sapaix08 3 5 00CCD7FE4C00
CPU: total:8 online:8 Threading degree per core:2
Physical Memory(MB): total:7744 free:5859
Virtual Memory(MB): total:32832 free:30936
Swap Memory(MB): total:25088 free:25077
Kernel Params: msgMaxMessageSize:4194304 msgMaxQueueSize:4194304
shmMax:68719476736 shmMin:1 shmIDs:131072
shmSegments:68719476736 semIDs:131072 semNumPerID:65535
semOps:1024 semMaxVal:32767 semAdjustOnExit:16384
2008-02-21-08.11.22.860106-300 I14183016A428 LEVEL: Error
PID : 37336 TID : 1 PROC : db2agent (instance) 5
INSTANCE: db2ab7 NODE : 005
APPHDL : 5-7 APPID: *LOCAL.db2ab7.080221131121
FUNCTION: DB2 UDB, base sys utilities, sqleGetAutomaticStorageDetails, probe:111111
DATA #1 : <preformatted>
dataSize 752 pMemAlloc 11099bac0 sizeof(struct sqleAutoStorageCfg) 16
2008-02-21-08.11.22.886670-300 I14183445A497 LEVEL: Error
PID : 37336 TID : 1 PROC : db2agent (instance) 5
INSTANCE: db2ab7 NODE : 005
APPHDL : 5-7 APPID: *LOCAL.db2ab7.080221131121
FUNCTION: DB2 UDB, buffer pool services, sqlbInitStorageGroupFiles, probe:50
MESSAGE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
DATA #1 : String, 17 bytes
/db2/AB7/sapdata1
2008-02-21-08.11.22.889226-300 I14183943A619 LEVEL: Severe
PID : 37336 TID : 1 PROC : db2agent (instance) 5
INSTANCE: db2ab7 NODE : 005
APPHDL : 5-7 APPID: *LOCAL.db2ab7.080221131121
FUNCTION: DB2 UDB, buffer pool services, sqlbInitStorageGroupFiles, probe:50
MESSAGE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
DATA #1 : String, 46 bytes
Error during storage group file initialization
DATA #2 : Pointer, 8 bytes
0x0ffffffffffed006
DATA #3 : Pointer, 8 bytes
0x0000000110981080
2008-02-21-08.11.22.894826-300 I14184563A435 LEVEL: Error
PID : 37336 TID : 1 PROC : db2agent (instance) 5
INSTANCE: db2ab7 NODE : 005
APPHDL : 5-7 APPID: *LOCAL.db2ab7.080221131121
FUNCTION: DB2 UDB, base sys utilities, sqleStartDb, probe:5
RETCODE : ZRC=0x800201A5=-2147352155=SQLB_AS_INVALID_STORAGE_PATH
"Storage path does not exist or is inaccessible."
2008-02-21-08.11.22.897320-300 I14184999A370 LEVEL: Warning
PID : 37336 TID : 1 PROC : db2agent (instance) 5
INSTANCE: db2ab7 NODE : 005
APPHDL : 5-7 APPID: *LOCAL.db2ab7.080221131121
FUNCTION: DB2 UDB, base sys utilities, sqle_remap_errors, probe:100
MESSAGE : ZRC 0x800201a5 remapped to SQLCODE -1051
2008-02-21-08.11.22.913142-300 I14185370A336 LEVEL: Severe
PID : 758092 TID : 1 PROC : db2sysc 5
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, sqleSysCtrlAddNode, probe:6
MESSAGE : ADD NODE failed with SQLCODE -1051 MESSAGE TOKEN /db2/AB7/sapdata1 in module SQLECRED
2008-02-21-08.11.22.918953-300 I14185707A440 LEVEL: Event
PID : 671870 TID : 1 PROC : db2start
INSTANCE: db2ab7 NODE : 000
FUNCTION: DB2 UDB, base sys utilities, sqleIssueStartStop, probe:1100
DATA #1 : String, 75 bytes
DB2NODE=5 DB2LPORT=3 /db2/db2ab7/sqllib/adm/db2rstop db2profile NODEACT 5 3
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFF9C2C : 0000 0024 ...$
2008-02-21-08.11.23.793386-300 I14186148A287 LEVEL: Event
PID : 823654 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:240
DATA #1 : String, 26 bytes
Stop phase is in progress.
2008-02-21-08.11.23.796267-300 I14186436A302 LEVEL: Event
PID : 823654 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:250
DATA #1 : String, 41 bytes
Requesting system controller termination.
2008-02-21-08.11.23.802154-300 I14186739A403 LEVEL: Warning
PID : 758092 TID : 1 PROC : db2sysc 5
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, routine_infrastructure, sqlerKillAllFmps, probe:5
MESSAGE : Bringing down all db2fmp processes as part of db2stop
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFE400 : 0000 0000 ....
2008-02-21-08.11.23.808100-300 I14187143A304 LEVEL: Event
PID : 823654 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:260
DATA #1 : String, 43 bytes
System controller termination is completed.
2008-02-21-08.11.23.812951-300 I14187448A381 LEVEL: Event
PID : 823654 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:280
DATA #1 : String, 24 bytes
There is no active EDUs.
DATA #2 : Hexdump, 4 bytes
0x0FFFFFFFFFFFCEE0 : 0000 0000 ....
2008-02-21-08.11.23.882148-300 I14187830A342 LEVEL: Severe
PID : 684418 TID : 1 PROC : db2acd 5
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, routine_infrastructure, sqlerFmpOneTimeInit, probe:100
DATA #1 : Hexdump, 4 bytes
0x0FFFFFFFFFFFF5A4 : FFFF FBEE ....
2008-02-21-08.11.24.008936-300 E14188173A301 LEVEL: Event
PID : 823654 TID : 1 PROC : db2stop2
INSTANCE: db2ab7 NODE : 005
FUNCTION: DB2 UDB, base sys utilities, DB2StopMain, probe:911
MESSAGE : ADM7514W Database manager has stopped.
STOP : DB2 DBM
2008-02-21-08.41.01.094426-300 I14188475A371 LEVEL: Warning
PID : 741576 TID : 1 PROC : db2bp
INSTANCE: db2ab7 NODE : 002
FUNCTION: DB2 UDB, Connection Manager, sqleUCappImpConnect, probe:150
RETCODE : ZRC=0x8005006D=-2147155859=SQLE_CA_BUILT
"SQLCA has been built and saved in component specific control block."
2008-02-21-08.41.01.109657-300 I14188847A371 LEVEL: Warning
PID : 741576 TID : 1 PROC : db2bp
INSTANCE: db2ab7 NODE : 002
FUNCTION: DB2 UDB, Connection Manager, sqleUCappImpConnect, probe:150
RETCODE : ZRC=0x8005006D=-2147155859=SQLE_CA_BUILT
"SQLCA has been built and saved in component specific control block."
2008-02-21-08.41.01.115152-300 I14189219A371 LEVEL: Warning
PID : 741576 TID : 1 PROC : db2bp
INSTANCE: db2ab7 NODE : 002
FUNCTION: DB2 UDB, Connection Manager, sqleUCappImpConnect, probe:150
RETCODE : ZRC=0x8005006D=-2147155859=SQLE_CA_BUILT
"SQLCA has been built and saved in component specific control block." -
Hi BW experts,
Can anyone explain the steps to create logical partitioning of Cube data?
I want to do it per fiscal year. If we create 5 cubes (same structure), one each for fiscal years '04, '05 ... '08,
how will the data flow into the respective cubes?
Kindly explain the steps.
thanks,
Nipun Sharma
P.S. points will be rewarded.
Hi Nipun,
1. Partitioning the cube, whether physically or logically, helps to improve its query performance.
Partitioning InfoCubes Using the Characteristic 0FISCPER (at InfoCube maintenance)
Prerequisites
When partitioning using 0FISCPER values, values are calculated within the partitioning interval that you specified in the InfoCube maintenance. To do this, the value for 0FISCVARNT must be known at the time of partitioning; it must be set to constant.
Procedure
1. The InfoCube maintenance is displayed. Set the value for the 0FISCVARNT characteristic to constant. Carry out the following steps:
a. Choose the Time Characteristics tab page.
b. In the context menu of the dimension folder, choose Object specific InfoObject properties.
c. Specify a constant for the characteristic 0FISCVARNT. Choose Continue.
2. Choose Extras -->DB Performance --> Partitioning. The Determine Partitioning Conditions dialog box appears. You can now select the 0FISCPER characteristic under Slctn. Choose Continue.
3. The Value Range (Partitioning Condition) dialog box appears. Enter the required data.
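The effect of such a 0FISCPER range partition on queries can be sketched in a few lines (a toy model in plain Python, not SAP code; the partition layout is invented for illustration):

```python
# Hypothetical monthly 0FISCPER partitions for one fiscal year (values YYYYPPP).
PARTITIONS = [("2008%03d" % m, "P2008%03d" % m) for m in range(1, 13)]

def partitions_for_range(low, high):
    """Partitions a query restricted to 0FISCPER in [low, high] must scan."""
    return [name for bound, name in PARTITIONS if low <= bound <= high]

# A query on periods 004..006 of 2008 scans 3 partitions instead of all 12.
hit = partitions_for_range("2008004", "2008006")
```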
Please check the links below on partitioning:
http://help.sap.com/saphelp_bw33/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
http://help.sap.com/saphelp_nw2004s/helpdata/en/0a/cd6e3a30aac013e10000000a114084/frameset.htm
Re: logical Vs physical partitions ?
Regarding Partitioning
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
/message/4204952#4204952 [original link is broken]
How i can partioning the BW?
Partioning and ETable
What is the use of cube partition?
*Please assign points if the info is useful*
Regards
CSM reddy -
Any real reason for logical partitioning over physical?
Hi!
I have seen a number of scenarios where SAP BI (assuming BI 7.0 for the rest of the discussion), running in high-volume scenarios, has been cluttered by a lot of logically partitioned cubes joined by MultiProviders.
Obviously the disadvantage of using logical partitions is that it increases maintenance effort: you need a new update rule for each logical partition (cube), you need to manually add/delete cubes from the MultiProvider, you have to filter data in the update rules to reach the correct cube based on a time characteristic, etc.
I have seen one clear advantage, which is the parallelization of queries run against a MultiProvider - assuming you want to query all underlying cubes... but are there any other advantages that overcome the maintenance overhead?
For me it feels like using physical database partitions in the same cube would be the correct decision in 90% of the cases. It seems to me that the underlying RDBMS should be able, on its own, to:
1) Parallelize a query over several physical partitions if needed.
2) Be smart enough to query only the needed partition if the query is restricted on the partitioning characteristic.
Please correct me, anyone - when are logical partitions really justified?
Best regards,
Christian
Edited by: Christian on May 15, 2008 3:55 PM
Hi,
This is a great question. Generally it is very difficult to understand the real motivation for logical partitioning into multiple cubes. You are right, it definitely increases the maintenance overhead. And you have already pointed out both the advantages and disadvantages.
Logical partitioning is more useful where we have huge amounts of data. Imagine a cube with 3 or 4 GB of data - not usual, but possible. Table partitioning is useful with small InfoCubes, less than 1 GB; with bigger InfoCubes, table-level partitioning alone may not provide the required level of performance. If we have too many small partitions, that also reduces performance; if we have too few partitions, query performance will not improve as much as we want. In this scenario, we can use logical partitioning (multiple cubes) combined with table-level partitioning to achieve the required performance levels. On top, we can even think of using aggregates to improve performance further.
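The too-many/too-few tradeoff can be sketched with a toy cost model (invented numbers, purely illustrative) for a query that hits one month out of the 120 million rows mentioned in the opening question:

```python
def query_cost(total_rows, n_partitions, partitions_hit, overhead=50):
    """Rows read plus a fixed per-partition cost, for equally sized partitions."""
    return partitions_hit * (total_rows / n_partitions + overhead)

few = query_cost(120_000_000, 4, 1)                # coarse: one partition is a big slice
monthly = query_cost(120_000_000, 24, 1)           # monthly: ~5M rows scanned
shredded = query_cost(120_000_000, 24_000, 1_000)  # tiny partitions: per-partition overhead adds up
```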
While all of the above is relevant for older versions of BW (up to 3.5), BI 7.0 has the BIA (BI Accelerator), which runs on a blade server with all the data cached directly in main memory. I am not sure how much this impacts the data modeling - I have not started working on the BIA yet.
rgds
naga -
Impact of logical partitioning on BIA
We are on BI release 701, SP 5. We are planning to create logical partitions for some of the InfoCubes in our system. These cubes are already BIA-enabled, so will the creation of logical partitions have any impact on the BIA, or improve the BIA rollup runtime?
Hi Leonel,
Logical partitioning will have an impact on BIA in terms of performance.
The current cube is already indexed on BIA. If you divide the current cube's data into different cubes and create a MultiProvider on top, then each cube will have its own F-table index on BIA.
You have to add the new cubes to BIA, execute the initial filling step, and schedule rollups for the respective cubes.
Point to be noted:
As data is deleted from the current cube and moved to the other cubes, the corresponding entries are not deleted from the current cube's F-table index in BIA. There will be index entries for records that are no longer present in the cube. Thus it is good practice to flush the BIA index, which removes all the current cube's indexes from BIA, and then create new indexes on BIA.
That way we have consistent indexes on BIA, which will not hamper performance.
This will also improve rollup time, as there is less data in each cube after logical partitioning. To improve rollup time further, we can implement delta indexing on BIA as well.
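The stale-index situation described above can be modeled in a few lines (a toy Python model, not BIA internals):

```python
# Cube rows keyed by record id; the BIA F-table index mirrors the cube.
cube = {1: ("2007", 100.0), 2: ("2008", 200.0)}
bia_index = dict(cube)               # index built before the data move

del cube[1]                          # selective deletion: 2007 data moved to the history cube

stale = set(bia_index) - set(cube)   # index entries with no backing cube row

bia_index = dict(cube)               # "flush and refill": rebuild the index from the cube
consistent = set(bia_index) == set(cube)
```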
Question: why do we want to create logical partitions for cubes that are on BIA, given that queries will never come to the cubes in the BI system?
Regards,
Kishanlal Kumawat. -
Hi Gurus
I need to do the logical partitioning of the cube by fiscal year. I have 3-4 years of data in the cube and am looking to create a separate cube for every 2 years.
On top of that, I am looking for one cube to hold current fiscal year data only, as most of the reporting is done on this.
My concern is: if I create separate cubes for separate years, do I need to create a new cube every year?
Kindly guide me on how to do this activity and the approach to be used.
Thanks
Dheeraj>
dheeraj dua wrote:
>
> My concern is: if I create separate cubes for separate years, do I need to create a new cube every year?
>
> Kindly guide me on how to do this activity and the approach to be used.
>
> Thanks
> Dheeraj
Yes, you would have to create a separate cube every year based on the requirement and add it to the MultiProvider, if you want to utilize logical partitioning.
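The yearly routing can be sketched as follows (illustrative Python with invented cube names): records for a fiscal year with no matching cube have nowhere to go, which is why a new cube and DTP filter are needed each year.

```python
from collections import defaultdict

cubes = {"2006", "2007", "2008"}   # one logical partition (cube) per fiscal year

def route(records):
    """Send each record to its year's cube; collect records with no target cube."""
    routed, unroutable = defaultdict(list), []
    for rec in records:
        target = routed[rec["fiscyear"]] if rec["fiscyear"] in cubes else unroutable
        target.append(rec)
    return routed, unroutable

routed, unroutable = route([{"fiscyear": "2008", "amount": 10},
                            {"fiscyear": "2009", "amount": 20}])
```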
Also, it would be better to remove the old (history) InfoCubes that are no longer reported on from the MultiProvider. -
Logical Partitioning Infoprovider
Hi Experts
Could anyone tell me how to create multiple InfoCubes on top of one particular InfoSource such that, say, one InfoCube contains data for Europe, another for Asia, another for the Americas, and so on? That is, they should be partitioned by region.
Points will be Assigned
Regards
Ank
Hi......
You can do it by the following method:
1. To partition an InfoCube, go to the InfoCube maintenance screen.
2. Choose Extras --> Partitioning.
3. Activate.
Condition: the cube should not contain data when you partition it.
E.g. if you have data for specific months Jan, Feb, Mar... and you are loading data month-wise: suppose you are in Feb and you add missing data for the month of Jan; these records get added after the Feb records in the internal table.
Also check this :
Only one InfoProvider? Using Multiprovider you could have lower TCO due to Logical Partitioning
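For the region split Ank asks about, the usual approach is one target cube per region, each fed by a DTP with a filter on the region characteristic. The filters can be sketched like this (a Python sketch; the cube names and region codes are invented):

```python
# Each target cube gets its own DTP whose filter keeps only that region's records.
DTP_FILTERS = {
    "CUBE_EUROPE":   lambda r: r["region"] in {"DE", "FR", "GB"},
    "CUBE_ASIA":     lambda r: r["region"] in {"CN", "JP", "IN"},
    "CUBE_AMERICAS": lambda r: r["region"] in {"US", "BR", "MX"},
}

def load(records):
    """Simulate one load: each record lands in the cube whose filter matches it."""
    return {cube: [r for r in records if keep(r)] for cube, keep in DTP_FILTERS.items()}

loads = load([{"region": "DE"}, {"region": "JP"}, {"region": "US"}])
```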
Hope this helps you.....
Regards,
Debjani.......
Edited by: Debjani Mukherjee on Oct 10, 2008 11:01 AM -
Data in ODS, InfoCube and MultiProvider (ListCube) are in sync
Hi,
My query is built on a MultiProvider. The data flow is: DataSource -> ODS, then ODS -> InfoCube, and the MultiProvider contains the InfoCube only.
Data in the ODS, InfoCube and MultiProvider (ListCube) are in sync.
However, the query results do not tie up with the ODS, InfoCube and MultiProvider (ListCube).
Can anyone let me know why this is happening and how I can resolve it?
Regards,
Sharma.
Hi,
Thanks for the help.
I resolved the issue on my own.
Regards,
Sharma. -
ORA-39126 during an export of a partition via dbms_datapump
Hi ,
I did the export using Data Pump on the command line and everything went fine, but while exporting via dbms_datapump I got this:
ORA-39126 during an export of a partition via dbms_datapump
ORA-00920
'SELECT FROM DUAL WHERE :1' P20060401
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 6228
the procedure is:
PROCEDURE pr_depura_bitacora
IS
  l_job_handle NUMBER;
  l_job_state  VARCHAR2(30);
  l_partition  VARCHAR2(30);
  v_sql        VARCHAR2(2000);
BEGIN
  -- Create a user-named Data Pump job to do a "table:partition-level" export
  -- Local
  select 'P' || to_char((select min(STP_LOG_DATE) from SAI_AUDITBITACORA), 'YYYYMM') || '01'
    into l_partition
    from user_tab_partitions
   where table_name = 'SAI_AUDITBITACORA'
     and rownum = 1;
  l_partition := rtrim(l_partition, ' ');
  l_job_handle := DBMS_DATAPUMP.OPEN(
    operation => 'EXPORT',
    job_mode  => 'TABLE',
    job_name  => 'EXPORT_ORACLENSSA');
  -- Schema filter
  DBMS_DATAPUMP.METADATA_FILTER(
    handle => l_job_handle,
    name   => 'SCHEMA_EXPR',
    value  => 'IN (''ORACLENSSA'')');
  DBMS_OUTPUT.PUT_LINE('Added filter for schema list');
  -- Table filter
  DBMS_DATAPUMP.METADATA_FILTER(
    handle => l_job_handle,
    name   => 'NAME_EXPR',
    value  => '=''SAI_AUDITBITACORA''');
  DBMS_OUTPUT.PUT_LINE('Added filter for table expression');
  -- Partition filter
  DBMS_DATAPUMP.DATA_FILTER(
    handle     => l_job_handle,
    name       => 'PARTITION_EXPR',
    value      => l_partition,
    table_name => 'SAI_AUDITBITACORA');
  DBMS_OUTPUT.PUT_LINE('Partition filter for schema list');
  DBMS_DATAPUMP.ADD_FILE(
    handle    => l_job_handle,
    filename  => 'EXP' || l_partition || '.DMP',
    directory => 'EXP_DATA_PUMP',
    filetype  => 1);
  DBMS_DATAPUMP.ADD_FILE(
    handle    => l_job_handle,
    filename  => 'EXP' || l_partition || '.LOG',
    directory => 'EXP_DATA_PUMP',
    filetype  => 3);
  DBMS_DATAPUMP.START_JOB(
    handle       => l_job_handle,
    skip_current => 0);
  DBMS_DATAPUMP.WAIT_FOR_JOB(
    handle    => l_job_handle,
    job_state => l_job_state);
  DBMS_OUTPUT.PUT_LINE('Job completed - job state = ' || l_job_state);
  DBMS_DATAPUMP.DETACH(handle => l_job_handle);
END;
I have already dropped and recreated the directory, granted read and write to public and to the user, granted create session, create table, create procedure and exp_full_database to the user, restarted the database and the listener with LD_LIBRARY pointing first to $ORACLE_HOME/lib, and added more space to the temporary tablespace.
The basic problem is:
Error: ORA 920
Text: invalid relational operator
Cause: A search condition was entered with an invalid or missing relational
operator.
Action: Include a valid relational operator such as =, !=, ^=, <>, >, <, >=, <=
, ALL, ANY, [NOT] BETWEEN, EXISTS, [NOT] IN, IS [NOT] NULL, or [NOT]
LIKE in the condition.
Obviously this refers to the invalid statement 'SELECT FROM DUAL ...'. I also recommend that you contact Oracle Support, because it happens inside an Oracle-provided package.
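For what it's worth, the error text suggests (this is an assumption on my part, not confirmed behavior) that the PARTITION_EXPR value handed to DBMS_DATAPUMP.DATA_FILTER must itself be a predicate fragment beginning with a relational operator, such as IN ('P20060401'), rather than the bare partition name the procedure builds; Data Pump appears to splice the value into a generated 'SELECT ... WHERE <expr>' statement. A sketch of the difference:

```python
def partition_expr(partition_name):
    """Wrap a bare partition name in the IN (...) form that a generated
    'SELECT ... WHERE <expr>' statement could accept (hypothetical fix)."""
    return "IN ('%s')" % partition_name

bad = "P20060401"                   # what l_partition currently holds
good = partition_expr("P20060401")  # starts with a relational operator
```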
Werner