Partitioning Fact Tables -- experiences, notes, documentation

I have gone through section 3.8 of the OBIA Installation and Configuration Guide -- "Partitioning Guidelines for Large Fact Tables".
Frankly, I find that documentation inadequate, and the example it uses is a poor one.
I am looking at partitioning W_GL_BALANCE_F. In this table, BALANCE_DT_WID seems to be a natural partitioning key. With 24 months of data and only month-end balances, I have only 24 distinct keys; therefore, this would be a LIST partitioning key.
I can and have rebuilt the table as a partitioned table, and I am proceeding with the DAC changes as per the documentation. However, I am looking for real-world implementations, documentation, notes, and experiences.
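To be concrete, the rebuild I have done is along these lines (a sketch only; the partition names and the YYYYMMDD month-end date-WID values are illustrative, not the actual DDL):

-- List-partitioning W_GL_BALANCE_F on BALANCE_DT_WID, assuming the
-- date WIDs are month-end dates in YYYYMMDD form (24 distinct values).
CREATE TABLE w_gl_balance_f_new NOLOGGING
PARTITION BY LIST (balance_dt_wid)
( PARTITION p_2008_01 VALUES (20080131)
, PARTITION p_2008_02 VALUES (20080229)
, PARTITION p_2008_03 VALUES (20080331)
, PARTITION p_others  VALUES (DEFAULT)   -- catch-all for later month-ends
)
AS SELECT * FROM w_gl_balance_f;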
Hemant K Chitale

Thanks.
Information such as BUs, Companies, Ledgers etc. from the source Financials systems becomes dimension data when extracted. So it goes into W_INT_ORG_D and W_LEDGER_D (for example), and the ROW_WIDs generated for ORG_NAME and LEDGER_NAME are the join keys to the fact table (W_GL_BALANCE_F). So these might be partition keys, but we would have to identify the generated values (the ROW_WIDs that become COMPANY_ORG_WID and LEDGER_WID) before defining the partition keys. That can be done only after the data is loaded?
How did you partition by BU?
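A minimal sketch of that discovery step (the mapping of W_LEDGER_D.ROW_WID to LEDGER_WID is per the join described above):

SELECT row_wid, ledger_name
FROM   w_ledger_d
ORDER BY ledger_name;
-- the ROW_WID values returned would then become the LIST partition
-- values for W_GL_BALANCE_F.LEDGER_WID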
Hemant K Chitale

Similar Messages

  • Partitioning Fact Table

    Hi
    In our data warehouse we have a fact table which has grown quite large, to around 17 million records. We have been pondering the options for partitioning this table.
    Unfortunately this fact table does not have any date columns; all columns are surrogate keys of dimensions, even for the time dimension. My idea is to partition by range by manually specifying the surrogate key ranges of the time dimension. Is that the way to proceed?
    The other option is to add a new column to the fact table, populate it with the corresponding "ddmmyyyy" in number format, and use that one to partition the table. If I go with this approach, and in my reports/queries I use the dimension key to join between the fact and dimension, will Oracle still identify which partition to look for? Or MUST I use the partitioned column in the queries for partition pruning to be effective?
    Any thoughts would be useful.
    Regards
    Mahesh

    "if in my reports / queries I use the dimension key to join between the Fact and Dimension, will Oracle still identify which partition to look for?" -- No, Oracle will not use partition pruning in this case.
    "Or MUST I use the partitioned column in the queries for partition pruning to be effective?" -- Yes, you need to use the partitioned column for partition pruning to be effective.
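    As a hedged illustration of the second option (all names here are hypothetical), a fact table range-partitioned on a numeric date key prunes only when that key appears in the predicate. Note also that a yyyymmdd layout, unlike ddmmyyyy, orders correctly as a number, which RANGE partitioning relies on:
    -- Hypothetical fact table, partitioned on a numeric YYYYMMDD key.
    CREATE TABLE sales_f (
      time_wid    NUMBER(8) NOT NULL,   -- e.g. 20071231
      product_wid NUMBER    NOT NULL,
      amount      NUMBER
    )
    PARTITION BY RANGE (time_wid)
    ( PARTITION p2007 VALUES LESS THAN (20080101),
      PARTITION p2008 VALUES LESS THAN (20090101),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );
    -- Prunes to p2008, because the partitioned column is in the predicate:
    SELECT SUM(amount) FROM sales_f WHERE time_wid BETWEEN 20080101 AND 20081231;
    -- A join on product_wid alone gives the optimizer nothing to prune on.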
    Cheers
    Nawneet

  • PL/SQL- Problem in creating a partitioned fact table using select as syntax

    Hi All,
    I am trying to create a clone (mdccma.fact_pax_bkng_t) of an existing fact table (mdccma.fact_pax_bkng) using dynamic PL/SQL. However, the PL/SQL anonymous block errors out with the following error:
    SQL> Connected.
    SQL> SQL> DECLARE
    ERROR at line 1:
    ORA-00911: invalid character
    ORA-06512: at "SYS.DBMS_SYS_SQL", line 1608
    ORA-06512: at "SYS.DBMS_SQL", line 33
    ORA-06512: at line 50
    Here is the PL/SQL block:
    -- CREATING FPB_T
    DECLARE
      v_owner           VARCHAR2(32) := 'MDCCMA';
      v_table_original  VARCHAR2(32) := 'FACT_PAX_BKNG';
      v_table           VARCHAR2(32) := 'FACT_PAX_BKNG_T';
      v_tblspc          VARCHAR2(32) := v_owner||'_DATA';
      CURSOR c_parts IS
        SELECT TABLESPACE_NAME, PARTITION_NAME, HIGH_VALUE,
               ROW_NUMBER() OVER (ORDER BY PARTITION_NAME) AS ROWNUMBER
        FROM USER_TAB_PARTITIONS
        WHERE TABLE_NAME = v_table_original
        ORDER BY PARTITION_NAME;
      v_cmd         CLOB := EMPTY_CLOB();
      v_cmd3        VARCHAR2(300) := 'CREATE TABLE ' ||v_owner||'.'||v_table||' TABLESPACE '||v_tblspc
                                   ||' NOLOGGING PARTITION BY RANGE'||'('||'SNAPSHOT_DTM '||')'||'(';
      v_part        VARCHAR2(32);
      v_tblspc_name VARCHAR2(32);
      v_row         NUMBER;
      v_value       LONG;
      v_tmp         VARCHAR2(20000);
      v_cur         INTEGER;
      v_ret         NUMBER;
      v_sql         DBMS_SQL.VARCHAR2S;
      v_upperbound  NUMBER;
    BEGIN
      v_cmd := v_cmd3;
      OPEN c_parts;
      FETCH c_parts INTO v_tblspc_name, v_part, v_value, v_row;
      WHILE c_parts%FOUND
      LOOP
        IF (v_row = 1) THEN
          v_tmp := ' PARTITION '||v_part||' VALUES LESS THAN ' ||'('|| v_value||')'||' NOLOGGING TABLESPACE '||v_tblspc_name;
        ELSE
          v_tmp := ', PARTITION '||v_part||' VALUES LESS THAN ' ||'('|| v_value||')'||' NOLOGGING TABLESPACE '||v_tblspc_name;
        END IF;
        v_cmd := v_cmd || v_tmp;
        -- DBMS_OUTPUT.PUT_LINE(v_cmd);
        FETCH c_parts INTO v_tblspc_name, v_part, v_value, v_row;
      END LOOP;
      -- DBMS_OUTPUT.PUT_LINE('Length:'||DBMS_LOB.GETLENGTH(v_cmd));
      v_cmd := v_cmd||')'||' AS SELECT ' || '*'||' FROM ' || v_owner||'.'|| v_table_original ||' WHERE '||'1'||'='||'2'||';';
      v_upperbound := CEIL(DBMS_LOB.GETLENGTH(v_cmd)/256);
      FOR i IN 1..v_upperbound
      LOOP
        v_sql(i) := DBMS_LOB.SUBSTR(v_cmd
                                   ,256             -- amount
                                   ,((i-1)*256)+1   -- offset
                                   );
      END LOOP;
      v_cur := DBMS_SQL.OPEN_CURSOR;
      DBMS_SQL.PARSE(v_cur, v_sql, 1, v_upperbound, FALSE, DBMS_SQL.NATIVE);
      v_ret := DBMS_SQL.EXECUTE(v_cur);
      CLOSE c_parts;
      DBMS_OUTPUT.PUT_LINE(v_cmd);
      -- EXECUTE IMMEDIATE v_cmd ;
    END;
    The above PL/SQL builds the DDL for a new partitioned fact table based on an existing fact table and executes it through a CLOB.
    Please look into the issue and let me know any changes or modifications/suggestions that are required to fix it. Any help is appreciated.
    Thank You,
    Sudheer

    Think this is your problem:
    v_cmd := v_cmd||')'||' AS SELECT ' || '*'||' FROM ' || v_owner||'.'|| v_table_original ||' WHERE '||'1'||'='||'2'||';';
    Remove the SQL terminator ';' -- dynamic SQL doesn't require it. Try this instead:
    v_cmd := v_cmd||')'||' AS SELECT ' || '*'||' FROM ' || v_owner||'.'|| v_table_original ||' WHERE '||'1'||'='||'2';
    Thanks
    Paul
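    (As an aside, and version-dependent, so treat it as a sketch: from Oracle 11g onward both native dynamic SQL and DBMS_SQL accept a CLOB directly, so the VARCHAR2S chunking loop can be dropped entirely:)
    -- Assuming Oracle 11g+ and that v_cmd already holds the finished DDL
    -- (without a trailing ';'):
    EXECUTE IMMEDIATE v_cmd;
    -- or, staying with DBMS_SQL and its CLOB overload of PARSE:
    -- v_cur := DBMS_SQL.OPEN_CURSOR;
    -- DBMS_SQL.PARSE(v_cur, v_cmd, DBMS_SQL.NATIVE);
    -- v_ret := DBMS_SQL.EXECUTE(v_cur);
    -- DBMS_SQL.CLOSE_CURSOR(v_cur);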

  • Should I partition a table or not???

    I have a table with well over 8 million records, growing by around 15,000 records daily. It is queried through a form (Forms 6i); no updates are done through the form, just SELECTs. The indexes are set up for the two types of queries mostly performed: by a date criterion and by an ID criterion. Is indexing good enough? I notice that it is getting harder to get results from date-specific queries. What are the best approaches to ensure the table is optimized for queries? I am not used to working with such a large table. Does partitioning help indexes work more efficiently? Should I run statistics on the whole table? I am not comfortable running statistics on the table when I am not around to monitor the performance of the server. The database runs 24 hours, 7 days a week; there is no real "downtime" to run stats.
    Please advise, I would like to find a best practice approach.

    (small correction to my earlier posting)
    "mostly performed - by a date criteria and an id criteria" -- Partitioning does NOT always improve performance. It can degrade performance if you have not partitioned based on the queries that actually hit the table.
    Be aware of local indexes: you might degrade performance because you may need to probe multiple index partitions.
    Do you query dates and IDs by range search or exact search?
    How much data (roughly 1%, 10%, 20%, etc.) do you fetch through the query?
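    To make the local-index point concrete, a minimal sketch (table and index names are hypothetical):
    -- Range-partitioned by date; queries filtering on entry_date prune well.
    CREATE TABLE txn_hist (
      txn_id     NUMBER        NOT NULL,
      entry_date DATE          NOT NULL,
      payload    VARCHAR2(100)
    )
    PARTITION BY RANGE (entry_date)
    ( PARTITION p2006 VALUES LESS THAN (DATE '2007-01-01'),
      PARTITION p2007 VALUES LESS THAN (DATE '2008-01-01'),
      PARTITION pmax  VALUES LESS THAN (MAXVALUE)
    );
    -- A LOCAL index is equipartitioned with the table: a lookup by txn_id
    -- alone must probe every index partition. A GLOBAL index would give a
    -- single probe, at the cost of heavier maintenance during partition DDL.
    CREATE INDEX txn_hist_id_lix ON txn_hist (txn_id) LOCAL;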

  • Experiences of Partitioning FACT tables

    Running BPC 7.0 SP3 for MS
    We have two very large FACT tables (195 million records and 105 million records) and these are currently growing at a rate of 2m/5m records per month; we are running an incremental optimize twice per day.
    It has been suggested that we consider partitioning the tables to improve performance, but I have not been able to find any users/customers with any experience of doing this.
    Specifically:
    1. Does it improve performance?
    2. What additional complexity does it add to regular maintenance?
    3. Have there been any problems encountered implementing partitioned tables?
    4. It would seem that partitioning based on time would make sense - historic data in one partition, current data in another. HOWEVER, many of our reports pull current year and prior year, so will this cause a reporting issue, or degrade report performance?

    I don't know if this is still an issue for you. You ask about FACT table partitioning specifically, but you need to be aware that it is possible to partition either the FACT tables or the fact table partition of the cube, or both. We have used (further) partitioning of the fact table partition in the cube with success, and it sounds as if this is what you are really asking about.
    The impacts are on:
    1. Processing time. A full optimize without Compress only processes the partitions that have changed, thereby reducing the run time where there is a lot of unchanged data. You mention that you run incremental updates twice daily; this is currently reprocessing the whole database. I would have expected the lite optimize to be more effective, supported by an overnight full optimize if you have an overnight window. You can also run the lite optimize more frequently.
    2. Query time. The filters defined in the partitions provide a more efficient path to data in the reporting processes than the defaults, which have the potential to scan large parts of the database.
    Partitioning is not a panacea. You need to be specific about the areas of performance problem that you have and choose the performance improvement strategy to address these. Looking at the indexing of the database is also an area where you can improve performance significantly.
    If you partition the cube, it is transparent to the usage of the application, from both user and admin perspectives. The greatest complexity comes in the definition of the partitions in the first place, but this is a normal DBA function. The trick is to ensure that the filter statements do not overlap, otherwise you might get a value duplicated in 2 partitions, and to define a catch-all partition to include anything not covered by the specific partitions. You should expect to revisit the partitioning from time to time. It is quite straightforward to repartition; you are not doing anything to the underlying data in the FACT tables.
    Time is a common dimension to partition on, and you may partition at different levels of granularity for different periods, e.g. current year by quarter or month, prior and future years by year. This reflects where the most frequent updates will be. It is also possible to define partitions based on combinations of dimensions; we use category and time, so that current-year actuals have the most granular partitions and all historic years' budgets go into a single partition.
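    For illustration only (the fact table name and period format are guesses, not BPC's actual schema), cube partition source queries with non-overlapping filters plus a catch-all might look like:
    -- Current-year actuals: the most granular partition.
    SELECT * FROM tblFactFinance
    WHERE  category = 'ACTUAL' AND time_id LIKE '2008%';
    -- All historic budgets in a single partition:
    SELECT * FROM tblFactFinance
    WHERE  category = 'BUDGET' AND time_id < '2008001';
    -- Catch-all: everything the specific filters do not cover.
    SELECT * FROM tblFactFinance
    WHERE  NOT (category = 'ACTUAL' AND time_id LIKE '2008%')
    AND    NOT (category = 'BUDGET' AND time_id < '2008001');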

  • SSAS Tabular - Vertically partitioned Fact Table Relationships

    Hi All, I have a very wide (300-column) fact table, and hence it is partitioned into 2 fact tables vertically, with the common key in both tables for the relation (1:1). Both tables have some measures and degenerate dimensions to be used for analysis.
    How should the data model look? I am stuck because bi-directional relationships don't seem to be supported, and only one direction works when degenerate dimensions are used.
    Also, these two fact tables are related to some common dimensions, and when I connect all of them together, some of the relationships get deactivated due to the circular nature of the relationships. Using USERELATIONSHIP is going to be very time-consuming considering the number of measures.
    What is the guidance around modelling vertically partitioned fact tables? Do I combine both tables into one wide table using a view or query? Any help appreciated.
    Thanks, Ashish Singh

    Hi Thomas, unfortunately I do not have any control over the DW design, and the client says they require most of the columns for analysis. I do not want to combine the fact tables into a wider table, but I am stuck with the relationship design in Tabular mode.
    1) Bi-directional relationships are not supported. So, if both fact tables have degenerate dimensions on which the measures have to be analyzed, it cannot work, because relationships are unidirectional in Tabular mode.
    2) Relationships get deactivated. Since both tables are connected to each other using a KEY column, any attempt to connect both tables to a common dimension results in INACTIVE relationships, which requires either using USERELATIONSHIP or importing the same table again with a different alias, which complicates the model.
    3) Inferred relationships can't be modelled. If Fact1 is connected to a dimension, the measures of Fact2 cannot be analyzed on that dimension, even though Fact1 and Fact2 are connected using a 1:1 relationship.
    All these design challenges make me lean towards using a SQL view joining the 2 fact tables in the Tabular model, so that I could provide the users with the analysis they require using the current DB structure. Is there a different approach? Any better ideas would be helpful.
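    The kind of view I am considering (table, key, and column names here are only placeholders; the inner join relies on the 1:1 key being present in both halves) would be something like:
    CREATE VIEW vw_fact_combined AS
    SELECT f1.fact_key,
           f1.measure_a,
           f1.degen_dim_x,
           f2.measure_b,
           f2.degen_dim_y
    FROM   fact_part1 f1
           JOIN fact_part2 f2
             ON f2.fact_key = f1.fact_key;  -- the 1:1 common key
    -- Import vw_fact_combined as the single fact table in the Tabular model.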
    Thanks, Ashish Singh

  • OBIEE 11g: Fact table does not have a properly defined primary key

    Hi,
    We have two fact tables and two dimension tables.
    We have joined the tables like
    D1-->F1
    D2-->F2
    D1-->D2
    We don't have any hierarchies.
    It is throwing an error in the consistency check:
    [nQSError: 15001] Could not load navigation space for subject area ABC.
    [nQSError: 15033] Logical table Fact1 does not have a properly defined primary key.
    It is not like a star schema; it is like a snowflake schema. How do we define a primary key for the fact table?
    Thanks.

    Hi,
    My suggestion would be to bring both facts into the same logical table as separate logical table sources, and have a single fact table in the BMM layer joined with multiple dimensions.
    Build a dimension hierarchy for the dimensions, and then in the logical level (content) mapping, map each dimension to the fact tables at the detail level or Grand Total level as appropriate.
    Refer to the link below:
    http://108obiee.blogspot.com/2009/08/joining-two-fact-tables-with-different.html
    Hope this helps.
    Thanks,
    Satya

  • Fact-tables not resulting in any data

    hello experts;
    I am trying to create a mock-up report in OBI Answers and somehow I'm having a problem with metric columns. When I pick any metric column I get: *(No Results: The specified criteria didn't result in any data)*
    But when I pick just the fact tables, I see the results. I have double-checked the RPD; the schemas between the dimension and fact tables look OK to me. I have tried a union with another subject area, and I get the same error. Anyone with an idea why the fact tables are not collaborating with the dimension tables?
    will appreciate your help

    Hi,
    Can you check the physical SQL generated by the report, and see whether that query returns results in the DB or not?
    I am sure this will help you to find the root cause.
    Regards,
    Kashi

  • Fact Tables not displaying

    Hi
    The fact tables are not displaying on my Excel sheet even though they are in the Power Pivot data.
    Thanks

    If all columns in your fact table are of type 'number' (which is probably the case) and you did not define any measures for the fact table, then the fact table will indeed not show up in Excel.
    Create a measure for the fact table (something like total amount:=CALCULATE(SUM('fact table'[amount]))) and the fact table (with the measure) will show up in Excel.

  • Physical query not accessing Fact Table/Column

    Hi,
    I'm facing a weird issue. I have a report with 32 columns: 22 columns are from 8 dimensions and 10 columns from 3 facts.
    But while observing the physical query, it's not touching one particular column coming from one fact table.
    So only 2 fact columns are in the query, and in the report that column is always a NULL value.
    When I add a filter on that fact column to force Analytics to go and join with that fact table, it still does not include that fact table/column, and produces a very funny query like the following:
    SELECT
    COLUMNS....
    FROM
    TABLES (8 DIMENSION AND 2 FACTS)
    GROUP BY COLUMNS....
    HAVING cast(NULL as DOUBLE PRECISION )>1234
    ================================
    "cast(NULL as DOUBLE PRECISION )>1234" - this is the filter, I have added on that Fact Column.
    Note: it may be because of RPD configuration/joins. But the system (Siebel Analytics 7.8) has been in production for 2 years with 500 users, and nobody has reported an issue like this before.
    Can anyone please give me a helpful hint or idea as to what is happening?
    Thanks in Advance
    Regards
    Sudipta

    Hi Kishore,
    Thanks for your reply.
    However, I did the analysis and found the following; please let me know whether your perspective falls along the same lines or not.
    Scenario:
    Suppose all the columns I'm trying to fetch are coming from 5 dimensions [d1,d2,d3,d4,d5] (some of them aliased tables) and 3 fact tables [f1,f2,f3].
    Now, the report is not fetching data for the c1 column, which comes from the f3 fact table. The f3 table is not even included in the physical query.
    What I found is that there is a dimension, say d3, which does not have any direct join with f3, but the other facts f1 and f2 do have joins with d3.
    If I remove the columns coming from the d3 dimension from the report, then the f3 fact columns do get populated in the report.
    Problem:
    I need to produce a single physical query for that report, while those report columns are defined by the business.
    Questions:
    1. So is it really possible to create a query for that kind of report without changing the RPD design?
    2. If not, is the only solution to split that report (I'm thinking) into 2-3 sub-reports?
    Thanks all of you for replying and providing your thoughts.
    Regards
    Sudipta

  • Unexpected results getting data from two fact tables through conformed dim

    Hi all,
    We are getting an unexpected behaviour in our OBIEE 10.1.3.3.3. We have this scenario:
    We have 2 fact tables, called F1 and F2. F1 has one measure, f1m1, and F2 has another one, f2m1.
    We have 4 conformed dimensions, called D1, D2, D3, Date.
    When we query each fact table individually, we get:
    date d1 d2 d3 f1m1
    dt1 - x - y - z - m1
    dt1 - x - y - z' - m2
    date d1 d2 d3 f2m1
    dt1 - x - y - z - m3
    dt1 - x - y - z'' - m4
    But, trying to obtain a comparison scenario, we get:
    date d1 d2 d3 f1m1 f2m1
    dt1 x y z m1 m4
    Instead of
    date d1 d2 d3 f1m1 f2m1
    dt1 x y z m1 m3
    Looking at the query log, we have caught the reason: the BI Server solves this request using ROW_NUMBER() to join SAWITH0 and SAWITH1 into the SAWITH2 result set, so the ordering may not be the same in the result sets of every fact table. More or less, the generated query is like:
    WITH
    SAWITH0 AS
      (select ....
       from F1),
    SAWITH1 AS
      (select ...
       from F2),
    SAWITH2 AS
      (select ... from
         (select ...,
                 ROW_NUMBER() OVER (PARTITION BY ....) c10
          from SAWITH0.d1 full outer join SAWITH1.d1 ....) D1
       where (D1.c10 = 1))
    select SAWITH2. ....
    from SAWITH2
    order by c1..c10
    The problem seems to be that the BI Server is ordering the result sets SAWITH0 and SAWITH1 and using the row number to join these result sets, but this does not produce the correct result.
    Any ideas?
    TIA
    Javier

    I have done a logical fact table with two fact table sources on it.
    The SQL performed against the database was this one:
    -------------------- Sending query to database named PRODS_AIX (id: <<153418>>):
    WITH
    SAWITH0 AS (select sum(T21296.CONSUMERS_SALES_EURO) as c1,
         T21309.DIVISION_CODE as c2
    from
         DIVISION T21309,
         C_CONSUMERS_SALES T21296
    where  ( T21296.DIVISION = T21309.DIMENSION_KEY )
    group by T21309.DIVISION_CODE),
    SAWITH1 AS (select sum(T21356.ORDER_VALUE) as c1,
         T21309.DIVISION_CODE as c2
    from
         DIVISION T21309,
         DWH_SALES_ORDER_OVERVIEW T21356
    where  ( T21309.DIMENSION_KEY = T21356.DIVISION_KEY )
    group by T21309.DIVISION_CODE)
    select distinct case  when SAWITH0.c2 is not null then SAWITH0.c2 when SAWITH1.c2 is not null then SAWITH1.c2 end  as c1,
         SAWITH0.c1 as c2,
         SAWITH1.c1 as c3
    from
         SAWITH0 full outer join SAWITH1 On nvl(SAWITH0.c2 , 'q') = nvl(SAWITH1.c2 , 'q') and nvl(SAWITH0.c2 , 'z') = nvl(SAWITH1.c2 , 'z')
    order by c1
    As you can see, there is one select (SAWITH0) for the first fact table, C_CONSUMERS_SALES, and one select (SAWITH1) for the second fact table, DWH_SALES_ORDER_OVERVIEW, and the two statements are joined with a full outer join.
    I wonder why you have three selects (SAWITH0, SAWITH1 and SAWITH2). Can you please paste the complete SQL performed?
    Can you also tell us which SQL is performed if you select only the columns from one fact table and not from the other?
    Regards
    Nico
    http://gerardnico.com
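    (An aside on the join generated above: the doubled NVL is a NULL-safe equality idiom. A single NVL(c2, 'q') comparison would wrongly match a real 'q' value against a NULL; the second comparison with a different sentinel, 'z', rules that out, while NULL on both sides still satisfies both. A tiny sketch with hypothetical tables:)
    SELECT *
    FROM   t1 a
           FULL OUTER JOIN t2 b
             ON  nvl(a.c2, 'q') = nvl(b.c2, 'q')  -- NULL = NULL matches here
             AND nvl(a.c2, 'z') = nvl(b.c2, 'z'); -- but 'q' vs NULL fails here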

  • Adding a column to a fact table which has pctfree 0

    Hi,
    I have a partitioned fact table with millions of rows and pctfree 0. Now I want to add a new column to this table (without any default value, so the new column is NULL). How will it affect (because of pctfree 0) the performance of old-data retrieval for reports?
    Thanks,
    Ravi

    Hello,
    If this is Berkeley DB, what is the platform and version? If not, this is not the right forum. You might try the SQL and PL/SQL forum at:
    PL/SQL
    Thank you,
    Sandra
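    (For what it's worth, on the Oracle side: adding a nullable column with no default is a data-dictionary-only change, so existing rows, and therefore old-data retrieval, are untouched. With PCTFREE 0 the risk comes later, when UPDATEs start populating the column in fully packed blocks and cause row migration. A hedged way to watch for that, using a hypothetical table name:)
    -- Metadata-only: no existing block is rewritten.
    ALTER TABLE my_fact_f ADD (new_col NUMBER);
    -- After the column starts being populated, check for migrated rows
    -- (ANALYZE populates CHAIN_CNT; it can be heavy on a large table):
    ANALYZE TABLE my_fact_f ESTIMATE STATISTICS;
    SELECT chain_cnt FROM user_tables WHERE table_name = 'MY_FACT_F';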

  • Dimension Table Larger Than Fact Table

    Hi,
    I need a solution to a situation in my project. The master data for Business Partner is close to 20 million records and growing. The overall record count in the fact table is only 1.5 to 2 million.
    I am looking for suggestions that can help me design an optimal data model which does not hinder reporting performance. Reporting with 20 million records in a dimension and 10 times less data in the fact table is not a common occurrence. I guess this is a scenario that the retail industries would have experienced due to their huge customer bases.
    P.S.: Segregation of the dimension is something that will not help, and it is currently a line-item dimension. So any thoughts apart from these will be highly appreciated.
    Thanks!
    Sajan R

    Hi Sajan Rajagopal,
    I think you need to go through the dimensions you have created, so that you can delete unnecessary InfoObjects assigned to each dimension and build a dimension which contains only the relevant master data.
    For example, for the material master, when we checked we had a huge amount of data, more than yours, so we made different dimensions for material and batch; we then loaded the data for those dimensions and it worked fine, and the data loads also completed fine.
    Please check your dimensions once, so that the required ones are defined correctly.

  • Fact tables are resulting in error message

    hello experts;
    I am trying to create a mock-up report in OBI Answers and somehow I'm having a problem with metric columns. When I pick any metric column I get: *(No Results: The specified criteria didn't result in any data)*
    I have double-checked the RPD; the schemas between the dimension and fact tables look OK to me. I have tried a union with another subject area, and I get the same error. Anyone with an idea why the fact tables are not collaborating with the dimension tables?
    will appreciate your help

    In case you are not able to see the logs, try this:
    save SET VARIABLE LOGLEVEL=2,DISABLE_CACHE_HIT=1; as a prefix in the report (Advanced tab).
    Mark as correct or helpful if it helps,
    Regards,
    Veeresh Rayan

  • ROW_WID in fact tables.

    Hi,
    I notice that some fact tables have a ROW_WID column, while other fact tables do not. What is the purpose of having ROW_WID in fact tables? Does anyone have an idea?
    I am using OBIA 7.9.5.
    Thanks.
    Manoj.

    Hi Christian,
    The query
    SELECT table_name
    FROM   user_tables
    WHERE  table_name LIKE '%_F'
    MINUS
    SELECT table_name
    FROM   user_tab_columns
    WHERE  table_name IN (SELECT table_name
                          FROM   user_tables
                          WHERE  table_name LIKE '%_F')
    AND    column_name = 'ROW_WID';
    returned the following tables.
    W_ACCT_BUDGET_F
    W_ACD_EVENT_F
    W_AP_BALANCE_F
    W_AP_XACT_F
    W_AR_BALANCE_F
    W_AR_XACT_F
    W_BOM_ITEM_F
    W_CNTCT_CNTR_BNCHMRK_TGT_F
    W_CNTCT_CNTR_PERF_F
    W_CUSTOMER_COST_LINE_F
    W_CUSTOMER_STATUS_HIST_F
    W_EMPLOYEE_DAILY_SNP_F
    W_EMPLOYEE_EVENT_F
    W_EMPLOYEE_MONTHLY_SNP_F
    W_EXPENSE_F
    W_GL_BALANCE_F
    W_GL_COGS_F
    W_GL_OTHER_F
    W_GL_REVN_F
    W_INVENTORY_DAILY_BAL_F
    W_INVENTORY_MONTHLY_BAL_F
    W_IVR_NAV_HIST_F
    W_PRODUCT_COST_LINE_F
    W_PRODUCT_XACT_F
    W_PURCH_COST_F
    W_PURCH_CYCLE_LINE_F
    W_PURCH_ORDER_F
    W_PURCH_RCPT_F
    W_PURCH_RQSTN_LINE_F
    W_PURCH_RQSTN_STATUS_F
    W_PURCH_SCHEDULE_LINE_F
    W_REP_ACTIVITY_F
    W_RQSTN_LINE_COST_F
    W_SALES_BACKLOG_HISTORY_F
    W_SALES_BACKLOG_LINE_F
    W_SALES_BOOKING_LINE_F
    W_SALES_CYCLE_LINE_F
    W_SALES_INVOICE_LINE_F
    W_SALES_PICK_LINE_F
    W_TAX_XACT_F
    Regards,
    Manoj.
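    (One caveat on that query, offered as a suggestion: the underscore in LIKE '%_F' is itself a single-character wildcard, so the pattern also matches any name that merely ends in some character followed by F. To match a literal _F suffix, escape it:)
    SELECT table_name
    FROM   user_tables
    WHERE  table_name LIKE '%\_F' ESCAPE '\';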
