Partitioning the fact table

Hi Gurus,
I have a question regarding partitioning the cube. When you partition the cube from the Extras menu, does it partition the F table, the E table, or both?
Second question: after partitioning, how will I know the names of the newly created tables?
Thanks,
ANU

Hi Anu,
Why partitioning is needed
An InfoCube contains a huge amount of data, and the fact table of the cube grows continuously. When a query is executed on the cube, it has to scan the entire table to find the requested records, for example sales in Jan 2008.
Advantage of partitioning
Partitioning the cube splits the fact table into smaller tables, so report performance improves because the query hits only the relevant partition.
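For example (a generic SQL sketch, not the SQL BW itself generates; table and column names are illustrative):

    -- With monthly partitions, this query touches only the 200801 partition
    -- instead of scanning the whole fact table.
    SELECT SUM(sales_amount)
    FROM   sales_fact
    WHERE  calmonth = '200801';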
Steps for partitioning
1. To partition a cube, it must not contain data.
2. Partitioning can only be done on the time characteristics 0CALMONTH or 0FISCPER (fiscal year/period).
Steps:
1. In change mode of the cube, choose the Extras menu and then the Partitioning option.
2. Select the 0CALMONTH time characteristic.
3. It will ask for the time period to partition over and the maximum number of partitions; enter them.
4. Activate the cube.
In BI 7 we can repartition a cube even if it contains data: select the cube, right-click, and choose Repartitioning. There we can
1. delete existing partitions,
2. create new ones, or
3. merge partitions.
Partitioning of the Cube
http://help.sap.com/saphelp_nw04s/helpdata/en/0a/cd6e3a30aac013e10000000a114084/frameset.htm
After partitioning
The Extras > Partitioning setting applies to the E (compressed) fact table; the F fact table is partitioned automatically by request (package dimension). You can find the partitioned table in SE11: /BIC/E<cube name> (E tables in general are /BIC/E*).
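If the database is Oracle, you can also list the partitions from the data dictionary; a quick sketch (replace CUBENAME with the cube's technical name):

    -- List the range partitions of a cube's E fact table (Oracle).
    SELECT table_name,
           partition_name,
           high_value              -- upper bound of each range partition
    FROM   user_tab_partitions
    WHERE  table_name = '/BIC/ECUBENAME'
    ORDER BY partition_position;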
Hope this answers your question.
Assign points if helpful.
Thanks and regards,
Bala

Similar Messages

  • Partitioning a fact table

    I am curious to hear techniques for partitioning a fact table with OWB. I know HOW to set up the partitioning for the table, but what I am curious about is what type of partitioning everyone is suggesting. Take the following example. Let's say we have a sales transaction fact table. It has dimensions of Date, Product, and Store. An immediate partitioning idea is to partition the table by month. But my curiosity arises in the method used to partition the fact table. There is no longer a true date field in the fact table to do range partitioning on. And hash partitioning will not distribute the records by month.
    One example I found was to "code" the surrogate key in the date dimension so that it was created in the following manner "YYYYMMDD". Then you could use the range partitioning based on values of the key in the fact table less than 20040200 for Jan. 2004, less than 20040300 for Feb. 2004, and so on.
    Is this a good idea?

    Jason,
    In general, obviously, query performance and scalability benefit from partitioning. Rather than hitting the entire table when retrieving data, you would only hit a part of the table. There are two main strategies for choosing a partitioning scheme:
    1) Users always query specific parts of the data (e.g. data from a particular month), in which case it makes sense for that part to be the size of the partition. If your end users often query by month or compare data on a month-by-month basis, then partitioning by month may well be the right strategy.
    2) Improve data loading speed by creating partitions. The database supports partition exchange loading, which Warehouse Builder supports as well, and which enables you to swap a temporary table and a partition in one operation. In general, your load frequency then decides your partitioning strategy: if you load on a daily basis, perhaps you want daily partitions. Beware that for Warehouse Builder to use the partition exchange loading feature you have to have a date field in the fact table, so you would need to change the time dimension.
    In general, your suggestion for the generated surrogate key would work.
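    A rough sketch of that idea in Oracle DDL (table and column names are illustrative, not OWB-generated):

        -- Range partitioning a sales fact on a surrogate date key coded as YYYYMMDD.
        CREATE TABLE sales_fact (
          date_key     NUMBER(8)    NOT NULL,   -- e.g. 20040115
          product_key  NUMBER       NOT NULL,
          store_key    NUMBER       NOT NULL,
          sales_amount NUMBER(12,2)
        )
        PARTITION BY RANGE (date_key) (
          PARTITION p_2004_01 VALUES LESS THAN (20040200),
          PARTITION p_2004_02 VALUES LESS THAN (20040300),
          PARTITION p_2004_03 VALUES LESS THAN (20040400)
        );

        -- Partition exchange loading then swaps a loaded staging table into a
        -- partition in a single dictionary operation:
        ALTER TABLE sales_fact EXCHANGE PARTITION p_2004_01
          WITH TABLE sales_stage INCLUDING INDEXES WITHOUT VALIDATION;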
    Thanks,
    Mark.

  • Error for the fact table while processing the cube - attribute key cannot be found when processing

    Please help, as I am new to SSAS and this is an urgent requirement. This is a MOLAP cube, and below is the error I receive when processing the cube. The cube is set to Process Full. Several similar errors pop up for various dimensions.
    "Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'Fact_Table', Column: 'ID', Value: '1'. The attribute is 'Id'. Errors in the OLAP storage engine: The attribute key was converted to an unknown member because
    the attribute key was not found. Attribute Id of Dimension: 17 - Ves - PoC Cont from Database: DB, Cube: IPNCube, Measure Group: iSrvy, Partition: Partition1, Record: 1."
    Thanks in advance.

    Thanks for the recommendations, David.
    It would be really great if you could clear up some of my doubts:
    As I understand it, all the dimensions need to be processed first, and then the fact table is processed.
    So if the IDs are not present in the dimension tables, they should not be present in the fact table either.
    Here we found null values in the dimension table while the IDs were present in the fact table. What might cause such a situation?
    Also, how frequently does the cube need to be processed? Currently the ETL that processes the cube is scheduled in a SQL Server Agent job on an hourly basis every day.
    Is there any possibility that the cube might still be in a processing state while the SQL job for the next run executes and tries to access and process the cube?
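    One way to check the orphan-key suspicion is to look for fact rows with no matching dimension member; a sketch with placeholder table and column names (the real names appear in the error message):

        -- Fact rows whose ID has no matching row in the dimension table.
        SELECT f.ID, COUNT(*) AS orphan_rows
        FROM   Fact_Table f
               LEFT JOIN Dim_Table d ON d.ID = f.ID
        WHERE  d.ID IS NULL
        GROUP BY f.ID;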

  • SSAS .abf file without partitions of fact table

    Hi All,
    I am trying to take a cube .abf file from the Dev environment. This cube contains partitions by month on one of the fact tables. Each of our environments has a different number of months.
    My problem is that when I take the .abf file from the DEV backup it contains only 4 months of data, so it has only 4 partitions. But SIT contains 8 months of data, and those months do not show up even after a full cube process. I came to know that we must take a fresh .abf file without processing. How can I take it?
    Note: I cannot create partitions from SSMS.
    Thanks in advance

    Hi shrSan,
    If we back up a cube that contains 4 partitions for the measure group, it will still have 4 partitions for the measure group when we restore it on a new OLAP Server. If we need to filter a fact table into more partitions, we have to create the partitions manually on the new OLAP Server.
    For more information, please see:
    Filtering a Fact Table for Multiple Partitions:
    http://technet.microsoft.com/en-us/library/ms175325(v=sql.105).aspx
    Partitions (Analysis Services - Multidimensional Data):
    http://technet.microsoft.com/en-us/library/ms175688.aspx
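    For reference, the filtering described in the first link works by binding each partition to its own source query over the same fact table, along these lines (assumed names, with a YYYYMMDD date key):

        -- Source query bound to partition "201401":
        SELECT * FROM dbo.FactSales WHERE DateKey BETWEEN 20140101 AND 20140131;
        -- Source query bound to partition "201402":
        SELECT * FROM dbo.FactSales WHERE DateKey BETWEEN 20140201 AND 20140228;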
    If I have misunderstood something, please point it out and describe your issue in more detail.
    Regards,
    Elvis Long
    TechNet Community Support

  • Unposted journals still in the fact tables

    Hi!
    Sometimes, not very often, but from time to time, we have a problem with posted journals not showing up in the fact tables and unposted journals not being eliminated from the fact tables.
    Is anyone having the same problem, or does anyone know what might cause it?
    Usually it's solved by unposting and posting the journal again, but it's tricky to know which journals are affected.
    Regards
    Fredrik

    Hi,
       It is difficult to find an explanation without doing some investigation. To isolate the problem a little, check whether the journal data is actually in the fact tables, and in which one (WB, FAC2 or FACT). Maybe it will be enough to process the partition to see the data. In my opinion it is probably related to something else happening in the system at the same time.
    Hope this can help you,
    Mihaela

  • Partitioning the OKL_STRM_ELEMENTS table

    I am looking for someone with experience partitioning the OKL_STRM_ELEMENTS table. I would like to know what you partitioned on and whether you had any problems once you implemented the partitions. Besides OKL_STRM_ELEMENTS, are there any other Lease Management tables that should be partitioned?
    BLC

    This is from documentation on partitioning the OKL_STRM_ELEMENTS table.
    The proposal was to partition the table based on STREAM_ELEMENT_DATE. The performance team, however, suggested creating a larger number of partitions using a smaller range (MMYYYY vs YYYY) rather than using a subpartition by hash(ID).
    Performance benefits may be seen in SQL statements that take advantage of the partitioning scheme through partition elimination (i.e. the SQLs actually scan less data); this happens for statements that have the partition key in the WHERE clause.
    Partitioning a table can actually degrade overall performance if the SQL statements referencing it do not include the partition key in the WHERE clause as a filter. The degradation is caused by the fact that the query has to scan several partitions, as opposed to a single segment in the non-partitioned case.
    The majority of SELECT statements on OKL_STRM_ELEMENTS in OKL do include STREAM_ELEMENT_DATE in the WHERE clause, so the overall benefit should be positive.
    Moreover, ranges should be based on an analysis of your data distribution.
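    A sketch of the monthly range scheme described above (the column list is abbreviated; the real table has many more columns):

        CREATE TABLE okl_strm_elements_part (
          id                  NUMBER NOT NULL,
          stream_element_date DATE   NOT NULL,
          amount              NUMBER
        )
        PARTITION BY RANGE (stream_element_date) (
          PARTITION p_2004_01 VALUES LESS THAN (TO_DATE('2004-02-01','YYYY-MM-DD')),
          PARTITION p_2004_02 VALUES LESS THAN (TO_DATE('2004-03-01','YYYY-MM-DD')),
          PARTITION p_max     VALUES LESS THAN (MAXVALUE)
        );

        -- Only statements that filter on the partition key benefit from elimination:
        SELECT SUM(amount)
        FROM   okl_strm_elements_part
        WHERE  stream_element_date >= TO_DATE('2004-01-01','YYYY-MM-DD')
        AND    stream_element_date <  TO_DATE('2004-02-01','YYYY-MM-DD');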

  • Content Tab: None of the fact tables are compatible with the query request

    Hi All,
    One thing I am still not clear on, after all my years with OBIEE, is working with the Content tab in the BMM layer.
    I have made a rpd the joins in physical layer as shown below:
    https://picasaweb.google.com/114804305606242416264/OBIEEError#5663056545119428530
    And the BMM layer as:
    https://picasaweb.google.com/114804305606242416264/OBIEEError#5663056519553812930
    The error I get when I run a request with 3 columns from the following 3 tables is:
    Dim - Comment Code Details
    Fact - Complaint
    Dim - Service Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 14020] None of the fact tables are compatible with the query request Sr Num:[DAggr(Fact - Complaint.Sr Num by [ Dim - Service Details.Sr Cat Type Cd, Dim - Comment Code Details.Cmtcode name] )]. (HY000).
    The consistency check gives no errors. I have read everywhere, and I know I need to set appropriate aggregation levels in the LTS properties of the various dims and facts to help OBIEE understand our model, but how do I do that? How do I decide what the aggregation level should be, and at what detail?
    When I click the More button I see different options: Copy, Copy From, Get Levels, Check Level. What do these mean?
    Aggregation content, group by - Logical Level or Column: which one should I choose, and how should I decide?
    Can anyone explain the Content tab in detail, from scratch, with an example, and why we get these errors? I know many people who are well versed in everything else related to the RPD but this. A little effort at explaining from you guys will really be appreciated.
    Thanks in advance,
    Dev

    Hi Deepak,
    Option 1:
    My tables in physical layer are joined as below:
    D1--> F1 <--D2--> F2 <--D3
    Same way i model it in BMM
    D1--> F1 <-- D2--> F2 <--D3
    Here D1 is a non-conformed dimension for F2, and D3 is a non-conformed dimension for F1. Then I create the dimension hierarchies and try setting up the content levels.
    I go to Sources > Content tab of fact F1 and set:
    Dimensions----------- Logical level
    D1---------------------- D1 Detail
    D2---------------------- D2 Detail
    D3---------------------- D3 Total
    Then I go to Sources > Content tab of fact F2 and set:
    Dimensions----------- Logical level
    D1---------------------- D1 Total
    D2---------------------- D2 Detail
    D3---------------------- D3 Detail
    Then I also go into all the dimensions and set their content levels to Detail, but it still gives me errors; I am not sure where I am going wrong in setting the content levels.
    I need to know whether the way I have modeled it in the BMM is right.
    Option 2:
    I can combine the two facts into a single logical fact, or should the above design also work?
    (F1&F2) <-- D1, D2, D3 joined separately using complex logical joins.
    What would the Content tab details be?
    Thanks,
    Dev

  • Mapping the Fact table to different levels of a dimension

    Hi,
    I have a fact table which stores the data for 4 levels of the dimensions. The aggregation was handled by PL/SQL, and the fact table has the data for all 4 levels. When I try to map all the levels to a column in the fact table using OEM, it generates foreign key constraints referencing the columns mapped for the various levels of the dimension.
    The problem is that I am using a denormalized table to maintain the values of the dimension, so the columns mapped for the levels (except for the lowest) can't have a unique key defined on them. The cube is not getting created because of the error in creating the foreign key.
    Can you please suggest how to map this fact table?
    Thnks,
    Manohar Vanama

    I am not exactly clear on your schema, but I believe you are trying to map tables which are not a strict star or snowflake. This means that you cannot use CWM1 (and OEM) unless you change the structure of the tables. You might be able to map the tables with CWM2. The document below will assist you:
    Oracle9i OLAP User's Guide
    Chapter 4. Designing Your Database for OLAP
    Chapter 5. Creating OLAP Catalog Metadata

  • Transaction data can be loaded into the Fact table without loading the

    Transaction data can be loaded into the fact table without loading the corresponding master data (example: sales analysis transaction data can be loaded without populating any of its dimensions' master data)
    a.     True
    b.     False

    Hi Kutti,
    True - you need to select the option in the InfoPackage: always load, even if no master data exists.
    Bye
    Dinesh

  • Essbase answers - None of the fact tables are compatible with the query request "member"

    Hi,
    I have modelled an Essbase database into the repository.
    If I pull in the measure, period and year dimensions, filter on the year (member), and display the year (member) along with the period (alias) and the measure, it errors with:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 14020] None of the fact tables are compatible with the query request Fiscal Year.Fiscal Year Code. (HY000)
    However, all other things being equal, if I change the displayed year to the alias, it works.
    Can anyone tell me why?
    Is there a limitation that Essbase brings with it, such that you cannot display what you filter on?
    thanks,
    Robert.


  • Error: None of the fact tables are compatible with the query request

    Hi experts,
    I have one conformed dimension D1 and two fact tables F1 and F2 (F1 and F2 are both joined to D1).
    When I create a report from D1 and F1, the report runs fine. But when I also pull a column from F2 into the report, I get the following error:
    None of the fact tables are compatible with the query request Code Combinations
    Please suggest on the same.
    Regards,
    S

    Hi
    I have done the content level settings in each of the tables, D1, F1 and F2 (LTS); now I am getting the following error:
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 15018] Incorrectly defined logical table source (for fact table Gl Sets Of Books) does not contain mapping for [Code Combinations.Code Combinations.Affiliate, GL Balances.GL Balances.Currency Code, GL Balances.GL Balances.PTD_Balance, Gl Sets Of Books.Gl Sets Of Books .SoB Name]. (HY000)
    Gl Balances : D1
    Code Combinations : F1
    Gl Sets Of Books : F2
    I have checked the joins in the physical and BMM layers; all are fine.

  • None of the fact tables are compatible

    hi,
    I am developing a report from two fact table columns and one dimension table in OBIEE 11.1.1.5.0.
    I am getting this error:
    Error
    View Display Error
    Odbc driver returned an error (SQLExecDirectW).
    Error Details
    Error Codes: OPR4ONWY:U9IM8TAC:OI2DL65P
    State: HY000. Code: 10058. [NQODBC] [SQL_STATE: HY000] [nQSError: 10058] A general error has occurred. [nQSError: 43113] Message returned from OBIS. [nQSError: 43119] Query Failed: [nQSError: 14020] None of the fact tables are compatible with the query request Fact - Retail.Retail. (HY000)
    SQL Issued: SELECT 0 s_0, "TM Vehicle Sales"."- Offtake Facts"."Offtake" s_1, "TM Vehicle Sales"."- Retail Facts"."Retail" s_2, "TM Vehicle Sales"."Distributor"."Country" s_3 FROM "TM Vehicle Sales"
    regards
    vcm

    We would need to see your design: is the dim shared between the facts?
    You can work out the expected physical query from your column selection.
    Now pick one column from the 1st fact and a 2nd column from the dim, run a report, get the physical query, and verify the joins between OBIEE's query and your own.
    Then add a column from the 2nd fact and see how it works.
    Edited by: svee on Jun 29, 2012 6:21 AM

  • None of the fact tables are compatible error

    Hi All,
    I see this error (none of the fact tables are compatible) after setting the content level aggregation on the dimension tables and the fact table. I get this error only when I try to pull in a calculated item which is based on an attribute in the fact table. I have an attribute like year in the fact table, and I need to display it like 'CY'||'2013' in a calculated logical column; when I pull this into Answers I get this error.
    1) The joins are OK; there is only one fact table and 3 dimension tables.
    2) The content levels on the fact table are specified at the detail level, and also for the dimensions.
    Any suggestions - thanks for your time.

    Can anyone please provide some suggestions?
    > I looked at the fact table LTS and specified the logical level for each dimension as the detail level
    > I specified the LTS for each dimension table
    > I have a column in my fact table which is the calendar year, and I want a derived column rep_cal_year defined as 'CY'||cal_year; when I pull this derived column into Answers I get the error that none of the fact tables are compatible with the query.
    What could be missing?

  • "None of the fact tables are compatible with the query request " error

    I've got a situation where I have two facts (Fact_1, Fact_2) and three dimensions (dim_1, dim_2, dim_3) in one subject area. I've got dimension hierarchies set up for all the dimension tables.
    Dim_1 is one to many to Fact_1
    Dim_2 is one to many to Fact_2
    Dim_3 is one to many to both Fact_1 and Fact_2
    I've set up the content levels for the LTS for the Facts so that they are the lowest grain for dimensions they join to and the grand total grain for dimensions they do not join to.
    My rpd is consistent. When I run a report using an attribute from Dim_3 and Dim_1 or Dim_3 and Dim_2, the report comes back fine.
    But if I try to run a report using all three Dim tables, I get an error and the message "None of the fact tables are compatible with the query request ".
    First of all, is it possible to make a report using all three dimensions?
    Second, what's the best way to troubleshoot this error? Why are none of the fact tables compatible? I thought that as long as the aggregation levels were set to grand total for non-shared dimensions, Answers would be able to create the report properly.
    Any advise would be greatly appreciated.
    Thanks!
    -Joe

    OBIEE is looking for a fact that can link ALL the dimensions together. This is also known as the implicit fact. You don't have one fact that can relate all the dimensions - you have 2 facts that together can. Perhaps you need to create a single logical fact that has both LTSs for your physical facts and try it that way.
    Then you'd have Dim1, Dim2 and Dim3 all able to join to Fact1 (which is made of physical facts 1 & 2).

  • How to skip the fact table /BI0/9AEDFC01 error during the import phase in a heterogeneous migration

    Hi.
    Please find below an issue with the fact table during the import phase of an OS/DB migration; the log is enclosed for reference.
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140924185259
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 14248
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: job completed
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: END OF LOG: 20140924185259
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140924185259
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 14265
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF16
    (GSI) INFO: dbname  = "AQ220140924040917                                                                                                              "
    (GSI) INFO: vname    = "ORACLE                          "
    (GSI) INFO: hostname = "VA1WIPRSCM03                                                    "
    (GSI) INFO: sysname  = "Linux"
    (GSI) INFO: nodename = "VA1WIPRSCM03"
    (GSI) INFO: release  = "2.6.32-358.el6.x86_64"
    (GSI) INFO: version  = "#1 SMP Tue Jan 29 11:47:41 EST 2013"
    (GSI) INFO: machine  = "x86_64"
    (SQL) INFO: Searching for SQL file SQLFiles.LST
    (SQL) INFO: SQLFiles.LST not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST opened
    (SQL) INFO: Searching for SQL file DFACT.SQL
    (SQL) INFO: DFACT.SQL not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL opened
    (SQL) ERROR: Invalid entry at line 5 in file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) ERROR: SQL list was not built successfully
    (DDL) ERROR: check_sql_list() failed for /BI0/9AEDFC01
    (DB) INFO: disconnected from DB
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: job finished with 1 error(s)
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: END OF LOG: 20140924185259
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140925104442
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 23939
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF16
    (GSI) INFO: dbname  = "AQ220140924040917                                                                                                              "
    (GSI) INFO: vname    = "ORACLE                          "
    (GSI) INFO: hostname = "VA1WIPRSCM03                                                    "
    (GSI) INFO: sysname  = "Linux"
    (GSI) INFO: nodename = "VA1WIPRSCM03"
    (GSI) INFO: release  = "2.6.32-358.el6.x86_64"
    (GSI) INFO: version  = "#1 SMP Tue Jan 29 11:47:41 EST 2013"
    (GSI) INFO: machine  = "x86_64"
    (SQL) INFO: Searching for SQL file SQLFiles.LST
    (SQL) INFO: SQLFiles.LST not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST opened
    (SQL) INFO: Searching for SQL file DFACT.SQL
    (SQL) INFO: DFACT.SQL not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL opened
    (SQL) ERROR: Invalid entry at line 5 in file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) ERROR: SQL list was not built successfully
    (DDL) ERROR: check_sql_list() failed for /BI0/9AEDFC01
    (IMP) INFO: a failed DROP attempt is not necessarily a problem
    (SQL) ERROR: SQL list was not built successfully
    (DDL) ERROR: check_sql_list() failed for /BI0/9AEDFC01
    (DB) INFO: disconnected from DB
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: job finished with 1 error(s)
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: END OF LOG: 20140925104442
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140925124401
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 25323
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF16
    (GSI) INFO: dbname  = "AQ220140924040917                                                                                                              "
    (GSI) INFO: vname    = "ORACLE                          "
    (GSI) INFO: hostname = "VA1WIPRSCM03                                                    "
    (GSI) INFO: sysname  = "Linux"
    (GSI) INFO: nodename = "VA1WIPRSCM03"
    (GSI) INFO: release  = "2.6.32-358.el6.x86_64"
    (GSI) INFO: version  = "#1 SMP Tue Jan 29 11:47:41 EST 2013"
    (GSI) INFO: machine  = "x86_64"
    (SQL) INFO: Searching for SQL file SQLFiles.LST
    (SQL) INFO: SQLFiles.LST not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/SQLFiles.LST opened
    (SQL) INFO: Searching for SQL file DFACT.SQL
    (SQL) INFO: DFACT.SQL not found
    (SQL) INFO: Searching for SQL file /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: found /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: Trying to open /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL
    (SQL) INFO: /data1/SCMEXPORT/EXPAQ2/ABAP/DB/ORA/DFACT.SQL opened
    ------------------ C-STACK ----------------------
    R3load[S](LinStackBacktrace+0x8c)[0x48d167]
    R3load[S](LinStack+0x35)[0x6ca6c5]
    R3load[S](CTrcStack2+0x48)[0x48fba1]
    R3load[S](SigIGenAction+0x212)[0x58b4fb]
    libpthread.so.0[T][0x397680f710]
    R3load[S](check_sql_list+0xab0)[0x61bbb0]
    R3load[S](DBDrop+0xbb)[0x60a8fb]
    R3load[S](import+0xde6)[0x620986]
    R3load[S](main_r3ldmain+0x1cfc)[0x6073bc]
    libc.so.6[T](__libc_start_main+0xfd)[0x397601ed5d]
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140925125540
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 25932
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: job completed
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: END OF LOG: 20140925125540
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: START OF LOG: 20140925125540
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: sccsid @(#) $Id: //bas/741_REL/src/R3ld/R3load/R3ldmain.c#6 $ SAP
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: version R7.40/V1.8 [UNICODE]
    Compiled Nov 23 2013 13:06:03
    -------------------- Start of patch information ------------------------
    patchinfo (patches.h): (0.009) Support for SUM/ZDM and DMO (note 1778564)
    DBSL patchinfo (patches.h): (0.011) DBSL error corrections in 7.41: (4) LOBs (note 1928526)
    --------------------- End of patch information -------------------------
    process id 25955
    (DB) INFO: connected to DB
    (DB) INFO: DbSlControl(DBSL_CMD_NLS_CHARACTERSET_GET): UTF16
    (GSI) INFO: dbname  = "AQ220140924040917                                                                                                              "
    (GSI) INFO: vname    = "ORACLE                          "
    (GSI) INFO: hostname = "VA1WIPRSCM03                                                    "
    (GSI) INFO: sysname  = "Linux"
    (GSI) INFO: nodename = "VA1WIPRSCM03"
    (GSI) INFO: release  = "2.6.32-358.el6.x86_64"
    (GSI) INFO: version  = "#1 SMP Tue Jan 29 11:47:41 EST 2013"
    (GSI) INFO: machine  = "x86_64"
    (TSK) ERROR: file /tmp/sapinst_instdir/BS2013SR1/SCM703SR1/ORA/COPY/SYSTEM/STD/AS-ABAP/SAPDFACT_1.TSK.bck already seems to exist
                a previous run may not have been finished cleanly
                file /tmp/sapinst_instdir/BS2013SR1/SCM703SR1/ORA/COPY/SYSTEM/STD/AS-ABAP/SAPDFACT_1.TSK possibly corrupted
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: job finished with 1 error(s)
    /usr/sap/AQ2/SYS/exe/uc/linuxx86_64/R3load: END OF LOG: 20140925125540

    Dear Ram Nath,
    It may be late, but it will be useful to others.
    I faced a similar issue and solved it by commenting out the 5th line of the DFACT.SQL file.
    The issue was that the indexes were not generated properly. To avoid such issues we must ensure SAP note 1991576 is implemented beforehand; if not, we can comment out the 5th line.
    Below is the corrected file in my case:
    # ORACLE : NATIVE SQL EXPORT GENERATED AT 20150426083248
    #ind:  - commented line (earlier it was just 'ind: ')
    ind: /BI0/E0PPM_VC1~0
    tab: /BI0/F0PPM_VC1
    Please let me know if you have any queries.
    Regards
    Baranedharan S.
