Partitioning Fact Table

Hi
In our data warehouse we have a fact table which has grown quite large, around 17 million records. We have been pondering the options for partitioning this table.
Unfortunately this fact table does not have any date columns; all columns are surrogate keys of dimensions, even for the time dimension. My idea is to partition by range by manually specifying the surrogate key ranges of the time dimension. Is that the way one should proceed?
The other option is to add a new column to the fact table, populate it with the corresponding date in "ddmmyyyy" number format, and use that column to partition the table. If I go with this approach, and in my reports/queries I use the dimension key to join between the fact and dimension, will Oracle still identify which partition to look in? Or MUST I use the partitioned column in the queries for partition pruning to be effective?
Any thoughts would be useful.
Regards
Mahesh

"If in my reports/queries I use the dimension key to join between the fact and dimension, will Oracle still identify which partition to look in?" No, Oracle will not use partition pruning in this case.
"Or MUST I use the partitioned column in the queries for partition pruning to be effective?" Yes, you need to use the partitioned column for partition pruning to be effective.
Cheers
Nawneet
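
A minimal sketch of the second approach (all names hypothetical; note a YYYYMMDD layout is used rather than ddmmyyyy, since range partitioning needs values that sort chronologically):

CREATE TABLE sales_fact (
  date_id     NUMBER(8) NOT NULL,  -- e.g. 20240131 (YYYYMMDD)
  product_key NUMBER    NOT NULL,
  store_key   NUMBER    NOT NULL,
  amount      NUMBER
)
PARTITION BY RANGE (date_id) (
  PARTITION p_2024_01 VALUES LESS THAN (20240201),
  PARTITION p_2024_02 VALUES LESS THAN (20240301),
  PARTITION p_max     VALUES LESS THAN (MAXVALUE)
);

-- Prunes: the filter is on the partitioning column itself.
SELECT SUM(amount) FROM sales_fact
WHERE date_id BETWEEN 20240101 AND 20240131;

-- May not prune: the filter reaches the fact table only through the join,
-- so Oracle may scan every partition unless join/subquery pruning kicks in.
SELECT SUM(f.amount)
FROM sales_fact f JOIN dim_time t ON f.date_id = t.date_id
WHERE t.month_name = 'January 2024';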

Similar Messages

  • PL/SQL- Problem in creating a partitioned fact table using select as syntax

    Hi All,
    I am trying to create a clone (mdccma.fact_pax_bkng_t) of an existing fact table (mdccma.fact_pax_bkng) using dynamic PL/SQL. However, the PL/SQL anonymous block errors out with the following error:
    SQL> Connected.
    SQL> SQL> DECLARE
    ERROR at line 1:
    ORA-00911: invalid character
    ORA-06512: at "SYS.DBMS_SYS_SQL", line 1608
    ORA-06512: at "SYS.DBMS_SQL", line 33
    ORA-06512: at line 50
    Here is pl/sql block:
    -- CREATING FPB_T
    DECLARE
    v_owner VARCHAR2(32) := 'MDCCMA';
    v_table_original VARCHAR2(32) := 'FACT_PAX_BKNG';
    v_table VARCHAR2(32) := 'FACT_PAX_BKNG_T';
    v_tblspc VARCHAR2(32) := v_owner||'_DATA';
    CURSOR c_parts IS SELECT TABLESPACE_NAME, PARTITION_NAME,HIGH_VALUE, ROW_NUMBER() OVER (ORDER BY PARTITION_NAME) AS ROWNUMBER
    FROM USER_TAB_PARTITIONS
    WHERE TABLE_NAME = v_table_original
    ORDER BY PARTITION_NAME;
    v_cmd CLOB := EMPTY_CLOB();
    v_cmd3 varchar2(300) := 'CREATE TABLE ' ||v_owner||'.'||v_table||' TABLESPACE '||v_tblspc
    ||' NOLOGGING PARTITION BY RANGE'||'(' ||'SNAPSHOT_DTM '||')' ||'(';
    v_part VARCHAR2(32);
    v_tblspc_name VARCHAR2(32);
    v_row number;
    v_value LONG;
    v_tmp varchar2(20000);
    v_cur INTEGER;
    v_ret NUMBER;
    v_sql DBMS_SQL.VARCHAR2S;
    v_upperbound NUMBER;
    BEGIN
    v_cmd := v_cmd3;
    OPEN c_parts;
    FETCH c_parts INTO v_tblspc_name, v_part,v_value, v_row;
    WHILE c_parts%FOUND
    LOOP
    IF (v_row = 1) THEN
    v_tmp := ' PARTITION '||v_part||' VALUES LESS THAN ' ||'('|| v_value||')'||' NOLOGGING TABLESPACE '||v_tblspc_name;
    ELSE
    v_tmp := ', PARTITION '||v_part||' VALUES LESS THAN ' ||'('|| v_value||')'||' NOLOGGING TABLESPACE '||v_tblspc_name;
    END IF;
    v_cmd := v_cmd || v_tmp;
    -- DBMS_OUTPUT.PUT_LINE(v_cmd);
    FETCH c_parts INTO v_tblspc_name, v_part,v_value, v_row;
    END LOOP;
    -- DBMS_OUTPUT.PUT_LINE('Length:'||DBMS_LOB.GETLENGTH(v_cmd));
    v_cmd := v_cmd||')'||' AS SELECT ' || '*'||' FROM ' || v_owner||'.'|| v_table_original ||' WHERE '||'1'||'='||'2'||';';
    v_upperbound := CEIL(DBMS_LOB.GETLENGTH(v_cmd)/256);
    FOR i IN 1..v_upperbound
    LOOP
    v_sql(i) := DBMS_LOB.SUBSTR(v_cmd
    ,256 -- amount
    ,((i-1)*256)+1 -- offset
    );
    END LOOP;
    v_cur := DBMS_SQL.OPEN_CURSOR;
    DBMS_SQL.PARSE(v_cur, v_sql, 1, v_upperbound, FALSE, DBMS_SQL.NATIVE);
    v_ret := DBMS_SQL.EXECUTE(v_cur);
    CLOSE c_parts;
    DBMS_OUTPUT.PUT_LINE(v_cmd);
    -- EXECUTE IMMEDIATE v_cmd ;
    END;
    The above PL/SQL builds the DDL for a new partitioned fact table based on an existing fact table and executes it through a CLOB.
    Please look into the issue and let me know any changes or modifications/suggestions required to fix it. Any help is appreciated.
    Thank You,
    Sudheer

    Think this is your problem:
    v_cmd := v_cmd||')'||' AS SELECT ' || '*'||' FROM ' || v_owner||'.'|| v_table_original ||' WHERE '||'1'||'='||'2'||';';
    Remove the SQL terminator ';' ... dynamic SQL doesn't require it. Try this instead:
    v_cmd := v_cmd||')'||' AS SELECT ' || '*'||' FROM ' || v_owner||'.'|| v_table_original ||' WHERE '||'1'||'='||'2';
    Thanks
    Paul
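
    A quick way to see Paul's point (a sketch; the table name is hypothetical): the same statement parses once the trailing semicolon is dropped from the string.

    DECLARE
      v_sql VARCHAR2(200);
    BEGIN
      -- A trailing ';' inside the string raises ORA-00911: invalid character.
      -- v_sql := 'CREATE TABLE t_demo (n NUMBER);';
      v_sql := 'CREATE TABLE t_demo (n NUMBER)';  -- no terminator: parses fine
      EXECUTE IMMEDIATE v_sql;
    END;
    /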

  • Partitioning Fact Tables -- experiences, notes, documentation

    I have gone through section 3.8 of the OBIA Installation and Configuration Guide -- "Partitioning Guidelines for Large Fact Tables".
    Frankly, I find that documentation inadequate, and it uses a poor example.
    I am looking at partitioning W_GL_BALANCE_F . In this table, BALANCE_DT_WID seems to be a Partitioning Key. With 24 months data and only Month-End balances I have only 24 distinct keys. Therefore, this would be a LIST PARTITIONING Key.
    I can and have rebuilt the table as a partitioned table. And am proceeding with the DAC changes as per the documentation. However, I am looking for real world implementations, documentations, notes, experiences.
    Hemant K Chitale
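
    For reference, a minimal sketch of such a LIST scheme (the WID values are hypothetical month-end keys, not taken from the actual table):

    CREATE TABLE w_gl_balance_f_new
    PARTITION BY LIST (balance_dt_wid) (
      PARTITION p_201201 VALUES (20120131),
      PARTITION p_201202 VALUES (20120229),
      PARTITION p_other  VALUES (DEFAULT)  -- catch-all for unexpected keys
    )
    AS SELECT * FROM w_gl_balance_f;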

    Thanks.
    Information like BUs, Companies, Ledgers etc. from the source Financials systems are Dimensions when extracted. So they go into W_INT_ORG_D and W_LEDGER_D (for example), and the ROW_WIDs generated for the ORG_NAME and LEDGER_NAME are the join keys to the fact table (W_GL_BALANCE_F). So these might be partition keys, but we'd have to identify the generated values (ROW_WIDs becoming COMPANY_ORG_WID and LEDGER_WID) before defining the partition keys. Can that be done only after the data is loaded?
    How did you partition by BU ?
    Hemant K Chitale

  • Experiences of Partitioning FACT tables

    Running BPC 7.0 SP3 for MS
    We have two very large FACT tables (195 million records and 105 million records), and these are currently growing at a rate of 2m-5m records per month; we are running an incremental optimize twice per day.
    It has been suggested that we consider partitioning the tables to improve performance, but I have not been able to find any users/customers with experience of doing this.
    Specifically
    1. Does it improve performance?
    2. What additional complexity does it add to regular maintenance ?
    3. Have there been any problems encountered implementing Partioned tables?
    4. It would seem that partitioning based on time would make sense: historic data in one partition, current data in another. HOWEVER, many of our reports pull current year and prior year, so will this cause a reporting issue? Or degrade report performance?

    I don't know if this is still an issue for you. You ask about FACT table partitioning specifically, but you need to be aware that it is possible to partition either the FACT tables or the fact table partition of the cube, or both. We have used (further) partitioning of the fact table partition in the cube with success, and it sounds as if this is what you are really asking about.
    The impacts are on
    1. Processing time: a full optimize without Compress only processes the partitions that have changed, thereby reducing the run time where there is a lot of unchanged data. You mention that you run incremental updates twice daily; this is currently reprocessing the whole database. I would have expected the lite optimize to be more effective, supported by an overnight full optimize if you have an overnight window. You can also run the lite optimize more frequently.
    2. Query time: the filters defined in the partitions provide a more efficient path to data in the reporting processes than the defaults, which have the potential to scan large parts of the database.
    Partitioning is not a panacea. You need to be specific about the areas of performance problem that you have and choose the performance improvement strategy to address them. Looking at the indexing of the database is also an area where you can improve performance significantly.
    If you partition the cube, it is transparent to the usage of the application, from both user and admin perspectives. The greatest complexity comes in the definition of the partitions in the first place, but this is a normal DBA function. The trick is to ensure that the filter statements do not overlap, otherwise you might get a value duplicated in 2 partitions, and to define a catch-all partition to include anything not covered by a specific partition. You should expect to revisit the partitioning from time to time. It is quite straightforward to repartition; you are not doing anything to the underlying data in the FACT tables.
    Time is a common dimension to partition on, and you may partition at different levels of granularity for different periods, e.g. current year by quarter or month, prior and future years by year. This reflects where the most frequent updates will be. It is also possible to define partitions based on combinations of dimensions; we use category and time, so that current year actuals have the most granular partitions and all historic years' budgets go into a single partition.
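
    To illustrate the non-overlapping filters plus catch-all (a sketch with hypothetical table and column names, not the poster's actual scheme), the source queries behind two partitions might look like:

    -- Partition "Fact_2023": one year, bounded on the time key.
    SELECT * FROM dbo.tblFactFinance WHERE TimeID BETWEEN 20230101 AND 20231231
    -- Partition "Fact_Hist": catch-all for everything earlier.
    SELECT * FROM dbo.tblFactFinance WHERE TimeID < 20230101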

  • SSAS Tabular - Vertically partitioned Fact Table Relationships

    Hi All, I have a very wide (300 column) fact table, and hence it is partitioned into 2 fact tables vertically, with the common key in both tables for the relationship (1:1). Both tables have some measures and degenerate dimensions to be used for analysis.
    How should the data model look? I am stuck because bi-directional relationships don't seem to be supported, and only one direction works when degenerate dimensions are used.
    Also, these two fact tables are related to some common dimensions, and when I connect all of them together, some of the relationships get deactivated due to the circular nature of the relationships. Using USERELATIONSHIP is going to be very time-consuming considering the number of measures.
    What is the guidance around modelling vertically partitioned fact tables? Do I combine both tables into one wide table using a view or query? Any help appreciated.
    Thanks, Ashish Singh

    Hi Thomas, Unfortunately I do not have any control over the DW design, and the client says they require most of the columns for analysis. I do not want to combine the fact tables into a wider table, but I am stuck with the relationship design in Tabular mode.
    1) Bi-directional relationships are not supported. So, if both fact tables have degenerate dimensions on which the measures have to be analyzed, it cannot work, because relationships are unidirectional in Tabular mode.
    2) Relationships get deactivated. Since both tables are connected to each other using a KEY column, any attempt to connect both tables to a common dimension results in INACTIVE relationships, which needs either USERELATIONSHIP or importing the same table again with a different alias, which complicates the model.
    3) Inferred relationships can't be modelled. If Fact1 is connected to a dimension, the measures of Fact2 cannot be analyzed on that dimension, even though Fact1 and Fact2 are connected using a 1:1 relationship.
    All these design challenges make me lean towards using a SQL view joining the 2 fact tables in the Tabular model, so that I could provide the users with the analysis they require using the current DB structure. Is there a different approach to achieve this?
    Any better ideas would be helpful.
    Thanks, Ashish Singh
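
    A minimal sketch of the view approach discussed above (table and column names are hypothetical):

    CREATE VIEW dbo.vw_FactCombined AS
    SELECT f1.FactKey,
           f1.Measure1, f1.DegenDim1,  -- from the first vertical slice
           f2.Measure2, f2.DegenDim2   -- from the second vertical slice
    FROM dbo.Fact_Part1 f1
    JOIN dbo.Fact_Part2 f2
      ON f2.FactKey = f1.FactKey;      -- the 1:1 common key

    Importing the single view gives the Tabular model one fact table, which sidesteps the inactive-relationship and direction problems entirely.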

  • Adding a column in a fact table which has pctfree 0

    Hi,
    I have a partitioned fact table with millions of rows and pctfree 0. Now I want to add a new column to this table (without any default value, so the new column is NULL). Because of pctfree 0, how will this affect the performance of old-data retrieval for reports?
    Thanks,
    Ravi

    Hello,
    If this is Berkeley DB, what is the platform and version? If not, this is not the right forum; you might try the SQL and PL/SQL forum at:
    PL/SQL
    Thank you,
    Sandra
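
    For what it's worth, a sketch of the operation in question (the column name is hypothetical): adding a nullable column with no default is a dictionary-only change in Oracle, so retrieval of old data should be unaffected until the column is actually populated; with PCTFREE 0 there is no free space in the existing blocks, so a later mass UPDATE of the new column risks row migration.

    ALTER TABLE fact_table ADD (load_batch_id NUMBER);  -- metadata only, fast

    -- The pctfree 0 risk appears only here: rewriting rows with no block
    -- free space can migrate them.
    -- UPDATE fact_table SET load_batch_id = 42;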

  • Urgent regarding E & F fact table

    Hi all,
    How and where do we find the E & F partitioned fact tables having a size larger than 30?
    It's very urgent.
    Thanks & Regards,
    Priya.

    Hi,
    You can find the tables related to an InfoCube by following the naming convention below.
    /BI<C or digit>/<table code><InfoCube><dimension>
    <C or digit>: C = customer-defined InfoCube
    digit = SAP-defined InfoCube
    <table code>: D = dimension table
    E = compressed fact table
    F = uncompressed fact table
    <InfoCube>: the name of the InfoCube without leading digits (if any)
    <dimension>: (only used for dimension tables)
    P = package dimension
    U = unit dimension
    T = time dimension
    0-9, A, B, C = user-defined dimension tables
    And you can find info about the size of an InfoCube under:
    Calculating size of CUBE & ODS
    regards,
    Pruthvi R

  • Partition Pruning - Dimension and FACT tables..

    Hi
    I have a DWH environment where we have partitioned the FACT table by a date column. This is RANGE partitioning. The TIME dimension table joins to the FACT table on this date. However, the end-user queries will typically be fired using a different column in the time dimension that holds more VIEWABLE date values, e.g. in format MON-YYYY or YYYY-MM.
    The query is autogenerated by the viewer tool. The SQL has something like
    select sum(balance), MONTH from fact a, dim_time b
    where a.date = b.date and <-- this the partitioned key in fact
    b.month_year_col = 'Apr-2006' <-- Dimension filter.
    In the above case, Oracle is not doing PARTITION PRUNING. I have 24 periods of data, and in the explain plan I can see that it goes to all 24 partitions. However, if I change the query to
    select sum(balance), MONTH from fact a, dim_time b
    where a.date = b.date and <-- this the partitioned key in fact
    b.date = '31-Apr-2006' <-- Dimension filter.
    it does partition pruning. The explain plan shows that it goes to only one partition.
    Any help on this please. I would need the first query to use PARTITION PRUNING.
    Thanks
    bala

    Hi All
    Got it to work with these 3 parameters:
    alter system set "_subquery_pruning_enabled" = true;
    alter session set "_subquery_pruning_cost_factor" = 1;
    alter session set "_subquery_pruning_reduction" = 100;
    Thanks to all those who had a look at my question.
    Regards
    bala
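
    Those underscore parameters are hidden and version-specific, so handle them with care. An alternative sketch (hypothetical names): derive literal bounds on the partitioning column itself, which prunes without relying on subquery pruning at all.

    SELECT SUM(f.balance)
    FROM   fact f
    WHERE  f.date_col >= DATE '2006-04-01'
    AND    f.date_col <  DATE '2006-05-01';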

  • Find the partition for the fact table

    Oracle version : Oracle 10.2
    I have one fact table with daily partitions.
    I am inserting some test data in this table for old date 20100101.
    I am able to insert this record in this table as below
    insert into fact_table values (20100101,123,456);
    However I observed that the partition for this date does not exist in the table (all_tab_partitions); moreover, I am not able to select the data using
    select * from facT_table partition(d_20100101)
    but I am able to extract the data using
    select * from facT_table where date_id=20100101
    Could someone please let me know the way to find the partition into which this data might have been inserted,
    and if the partition for date 20100101 is not present, then why does the insert for that date work?

    user507531 wrote:
    However I observed that the partition for this date does not exist in the table (all_tab_partitions); moreover I am not able to select the data using
    select * from fact_table partition(d_20100101)
    Wrong approach.
    but I am able to extract the data using
    select * from fact_table where date_id=20100101
    Correct approach.
    could someone please let me know the way to find the partition in which this data might be inserted, and if the partition for date 20100101 is not present then why is the insert for that date working?
    Who says that the date is invalid? This is a range partition, which means that each partition covers a range. And if you read up in the SQL Reference Guide on how a range partition is defined, you will notice that each partition is defined by the end value of the range it covers. There is no start value; the previous partition's end value is the border between this and the prior partition.
    I suggest that before you use a database feature you first familiarise yourself with it. Otherwise incorrect use, and wrong assumptions about it, are the likely result.
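
    A small sketch of the point (partition names are hypothetical): each range partition is defined only by its upper bound, so an "old" key simply falls into the lowest partition whose bound exceeds it.

    CREATE TABLE fact_table (
      date_id NUMBER(8), dim_key NUMBER, measure NUMBER
    )
    PARTITION BY RANGE (date_id) (
      PARTITION d_20100401 VALUES LESS THAN (20100402),
      PARTITION d_20100402 VALUES LESS THAN (20100403)
    );

    -- 20100101 is below the first upper bound, so the row lands in
    -- d_20100401 even though no partition is named for that date:
    INSERT INTO fact_table VALUES (20100101, 123, 456);

    -- which is why PARTITION (d_20100101) fails but this works:
    SELECT * FROM fact_table PARTITION (d_20100401) WHERE date_id = 20100101;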

  • Partitioning a fact table

    I am curious to hear techniques for partitioning a fact table with OWB. I know HOW to set up the partitioning for the table; what I am curious about is what type of partitioning everyone suggests. Take the following example: let's say we have a sales transaction fact table. It has dimensions of Date, Product, and Store. An immediate partitioning idea is to partition the table by month. But my curiosity arises in the method used to partition the fact table. There is no longer a true date field in the fact table to do range partitioning on, and hash partitioning will not distribute the records by month.
    One example I found was to "code" the surrogate key in the date dimension so that it was created in the following manner "YYYYMMDD". Then you could use the range partitioning based on values of the key in the fact table less than 20040200 for Jan. 2004, less than 20040300 for Feb. 2004, and so on.
    Is this a good idea?

    Jason,
    In general, obviously, query performance and scalability benefit from partitioning. Rather than hitting the entire table when retrieving data, you would only hit a part of the table. There are two main strategies for choosing a partitioning scheme:
    1) Users always query specific parts of the data (e.g. data from a particular month), in which case it makes sense for that part to be the size of the partition. If your end users often query by month or compare data on a month-by-month basis, then partitioning by month may well be the right strategy.
    2) Improve data loading speed by creating partitions. The database supports partition exchange loading, supported by Warehouse Builder as well, which enables you to swap a temporary table and a partition in one operation. In general, your load frequency then decides your partitioning strategy: if you load on a daily basis, perhaps you want daily partitions. Beware that for Warehouse Builder to use the partition exchange loading feature you will have to have a date field in the fact table, so you would change the time dimension.
    In general, your suggestion for the generated surrogate key would work.
    Thanks,
    Mark.
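
    A sketch of the partition exchange load mentioned in 2) (hypothetical names; the staging table must match the fact table's column structure):

    -- Load a month into a plain staging table, then swap it into the
    -- partitioned fact as a near-instant dictionary operation:
    ALTER TABLE sales_fact
      EXCHANGE PARTITION p_2004_01 WITH TABLE sales_stage
      INCLUDING INDEXES WITHOUT VALIDATION;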

  • Copy Cube, Fact table partitioned automatically.

    Hello Guys.
    I have a problem.
    I made a copy of a cube to back up its data while redesigning the original cube.
    The problem is that now I see that the fact table of my new cube is partitioned into multiple partitions.
    I'm on an Oracle database, and with program SAP_DROP_EMPTY_FPARTITIONS in SE38 I can see that in the Dev environment there are no partitions in either cube, but in the Prod system the original cube is not partitioned while the new cube (the copy of the original one) has a lot of partitions.
    These partitions are making the loads from one cube to the other very slow (around 4 hours for 300,000 records), and it's because of the partitioning of the cube.
    So I want to know what caused the fact table of the copied cube to be partitioned, and how I can solve this problem in order to make the data loads quicker.
    THANKS in advance.

    Did you try partitioning the cube by fiscper/calmonth?
    This will surely help the system, with a very low number of partitions.
    Cheers..
    (BTW: if you are creating a backup copy/archive of a cube which doesn't require posting date in reporting, try changing the posting date to end of month; the aggregation level will change accordingly, resulting in lower volume)...

  • Modeling: Is Fact Table partitioning that important?

    I have read about how much partitioning a fact table can speed up a query. On an upcoming project with specifications for only 4 InfoCubes, how can we take advantage of partitioning fact tables to speed up queries?
    Can this be factored into the design now, or should we wait till queries on the cubes begin to perform poorly?
    Thanks.

    Spend some time reviewing the links others have provided and search SDN.
    As mentioned, there are two types of partitioning generally talked about with SAP BW: logical and physical partitioning.
    Logical partitioning - instead of having all your data in a single cube, you might break it into separate cubes, with each cube holding a specific year's data, e.g. you could have 5 sales cubes, one for each year 2001 thru 2005.
    You would then create a Multi-Provider that allows you to query all of them together.
    A query that needs data from all 5 years would then automatically (you can control this) be split into 5 separate queries, one against each cube, running at the same time. The system automatically merges the results from the 5 queries into a single result set.
    So it's easy to see when this could be a benefit. If your queries, however, are primarily run just for a single year, then you don't receive the benefit of the parallel processing. In non-Oracle DBs, splitting the data like this may still be a benefit by reducing the number of rows in the fact table that must be read, but it does not provide as much value on an Oracle DB, since InfoCube queries use a star transformation.
    Physical partitioning - I believe only Oracle and Informix currently support range partitioning. This is a separately licensed option in Oracle.
    Physical partitioning allows you to split an InfoCube into smaller pieces. The pieces, or partitions, can only be created by 0FISCPER or 0CALMONTH for an InfoCube (ODSs can be partitioned, but require a DBA's involvement). The DB can then take advantage of this partitioning by "pruning" partitions during a query, e.g. when a query only needs data from June 2005.
    The DB is smart enough to restrict the indices and data it will read to the June 2005 partition. This assumes your query restricts/filters on the partitioning characteristic. It can apply this pruning to a range of partitions as well, e.g. 0FISCPER 001/2005 thru 003/2005 would only look at the 3 partitions.
    It is NOT smart enough, however, to figure out that if you restrict to 0FISCYEAR = 2005, it should only read 000/2005 thru 016/2005, since 0FISCYEAR is NOT the partitioning characteristic.
    An InfoCube MUST be empty in order to physically partition it. At this time, there is no way to add additional partitions thru AWB, so you want to make sure that you create partitions out into the future for at least a couple of years.
    If the base cube is partitioned, any aggregates that contain the partitioning characteristic (0CALMONTH or 0FISCPER) will automatically be partitioned.
    In summary, you need to figure out whether you want to use physical or logical partitioning on the cube(s), or both, as they are not mutually exclusive.
    So you would need to know how the data will be queried, and the volume of data. It would make little sense to partition cubes that will not be very large.

  • SSAS .abf file without partitions of fact table

    Hi All,
    I am trying to take a cube .abf file from the Dev environment. This cube contains partitions by month on one of the fact tables. Each of our environments has a different number of months.
    My problem is that when I take the .abf file from the DEV backup, it contains only 4 months of data, so it has only 4 partitions. But SIT contains 8 months of data, and those months are not showing up even after a full cube process. I came to know that we must take a fresh .abf file without processing. How can I take it?
    Note: I cannot create partitions from SSMS.
    Thanks in advance

    I am trying to take a cube .abf file from the Dev environment. This cube contains partitions by month on one of the fact tables. Each of our environments has a different number of months.
    My problem is that when I take the .abf file from the DEV backup, it contains only 4 months of data, so it has only 4 partitions. But SIT contains 8 months of data, and those months are not showing up even after a full cube process.
    Hi shrSan,
    If we back up a cube which contains 4 partitions for a measure group, it will always have 4 partitions for that measure group when we restore the cube on a new OLAP Server. If we need to filter a fact table into more partitions, we should create the partitions manually on the new OLAP Server.
    For more information, please see:
    Filtering a Fact Table for Multiple Partitions:
    http://technet.microsoft.com/en-us/library/ms175325(v=sql.105).aspx
    Partitions (Analysis Services - Multidimensional Data):
    http://technet.microsoft.com/en-us/library/ms175688.aspx
    If I have something misunderstood, please point out and elaborate your issue with more detail.
    Regards,
    Elvis Long
    TechNet Community Support

  • Regarding Fact Table Partitioning

    Hi Experts,
    I have a small clarification:
    How do we do fact table partitioning? In which scenario is it necessary?
    Could somebody please explain, with an example?
    Thanks in Advance.
    Janardhana Rao.

    And keep in mind - the main goal of partitioning the cube is usually performance (although there are also some data administration benefits), which is achieved by partition "pruning". This "pruning" occurs only if your query has selection criteria/restrictions based on the partitioning characteristic. What it does is restrict the amount of data that the query must examine.
    That is - if you partition your cube on 0FISCPER, your queries would want to use 0FISCPER as a selection/restriction; e.g. if you query on a single 0FISCPER, the database is smart enough to prune (exclude) all the data in the other partitions.
    If 0FISCPER or 0CALMONTH don't fit your needs, open a customer message with SAP. I saw a post suggesting that they have a procedure to permit partitioning the E fact table on some other characteristic, but I have not verified that myself (think I'll do that...)

  • Error for the fact table while processing the cube - attribute key cannot be found when processing

    Please help, as I am new to SSAS and this is an urgent requirement. This is a MOLAP cube, and below is the error I am receiving when processing the cube. The cube is set to Process Full. Several similar errors pop up for various dimensions.
    "Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'Fact_Table', Column: 'ID', Value: '1'. The attribute is 'Id'. Errors in the OLAP storage engine: The attribute key was converted to an unknown member because the attribute key was not found. Attribute Id of Dimension: 17 - Ves - PoC Cont from Database: DB, Cube: IPNCube, Measure Group: iSrvy, Partition: Partition1, Record: 1."
    Thanks in advance.

    Thanks for the recommendations, David.
    It would be really great if you could clear up some of my doubts:
    To my knowledge, all the dimensions need to be processed first, and then the fact table is processed.
    So if the IDs are not present in the dimension tables, they should not be present in the fact table either.
    Here we found null values in the dimension table while the IDs were present in the fact table. What might be the reasons for such a situation?
    Also, how frequently does the cube need to be processed? Currently the ETL which processes the cube is scheduled in a SQL Server Agent job on an hourly basis every day.
    Is there any possibility that the cube might be in a processing state while the SQL job for the next run gets executed, trying to access and process the cube while it is still processing?
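
    One way to investigate the mismatch described above (a sketch; table and column names are hypothetical) is to look for fact rows whose key has no matching dimension member, since those are exactly what the attribute-key error reports:

    SELECT f.ID, COUNT(*) AS orphan_rows
    FROM dbo.Fact_Table AS f
    LEFT JOIN dbo.Dim_Table AS d
      ON d.ID = f.ID
    WHERE d.ID IS NULL
    GROUP BY f.ID;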
