Significantly increase query performance for more than 10 million records

Here is the situation:
Every day there are more than 10 million records, with the data in text-file format (.csv (comma-separated values) extension, or similar).
An example text file name is transaction.csv:
Phone_Number
6281381789999
658889999888
618887897
etc. (more than 10 million rows in total)
From transaction.csv, the data is split into 3 RAM (in-memory) tables:
1st. table nation (nation_id, nation_desc)
2nd. table operator (operator_id, operator_desc)
3rd. table area (area_id, area_desc)
These 3 RAM tables are then queried to produce the physical table EXT_TRANSACTION (on hard disk).
The physical external Oracle table EXT_TRANSACTION has these result columns:
Phone_Number   Nation_Desc   Operator_Desc   Area_Desc
=======================================================
6281381789999  INA           SMP             SBY
So: text files (transaction.csv) --> RAM tables --> Oracle table (EXT_TRANSACTION)
The first 2 digits of the phone number are the nation_id, the next 4 digits are the operator_id, and the next 2 digits are the area_id.
I have heard that, to increase performance significantly, there is a technique of creating tables in memory (RAM) rather than on hard disk.
Any advice would be very much appreciated.
Thanks.

Oracle uses sophisticated algorithms for its various memory caches, including buffering table data in memory. These are described in Oracle® Database Concepts.
You can tell Oracle via the CACHE table clause to keep blocks for that table in the buffer cache (refer to the URL for the technical details of how this is done).
However, this means there is now less of the buffer cache available for other frequently accessed data. So this approach could make access to one table a bit faster at the expense of making access to other tables slower.
This is a balancing act - how much can one "interfere" with the cache before downgrading overall performance? Oracle also recommends that this type of "forced" caching be used only for small lookup tables. It is not a good idea to use it on large tables.
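For illustration, the clause itself is a one-liner per table - a minimal sketch, applied here to the three small lookup tables from the question:
-- Keep the blocks of these small lookup tables in the buffer cache.
-- (CACHE only changes how the blocks age out of the cache; it does not
-- pin them there unconditionally.)
ALTER TABLE nation CACHE;
ALTER TABLE operator CACHE;
ALTER TABLE area CACHE;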
As for your problem - why do you assume that keeping data in memory will make processing faster? That is a very limited approach. Memory is a resource that is in high demand, and a very finite one. It needs to be spent carefully to get the best performance.
The buffer cache is designed to cache "hot" (often accessed) data blocks. So in all likelihood, telling Oracle to cache a table you use a lot is not going to make it faster - Oracle is already caching the hot data blocks as well as it can.
You also need to consider what the actual performance problem is. If your process needs to crunch tons of data, it is going to be slow. Throwing more memory at it will be treating the symptom - not the actual problem, which is that tons of data are being processed.
So you need to define the actual problem. Perhaps it is not slow I/O - there could be a user-defined PL/SQL function used as part of the ELT process that causes the problem. Parallel processing could be used to do more I/O at the same time (assuming the I/O subsystem has the capacity). The process can perhaps be designed better - instead of making multiple passes through a data set, crunching the same data (but different columns) again and again, do it in a single pass.
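To sketch what a single pass could look like for the CSV-to-EXT_TRANSACTION flow in the question (the directory object, data types and hint are assumptions, not from the original post): read the file through an external table and join the lookup tables directly, with no intermediate staging.
-- Read transaction.csv directly via an external table. DATA_DIR is an
-- assumed directory object pointing at the directory holding the file.
CREATE TABLE transaction_csv (phone_number VARCHAR2(20))
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY data_dir
  ACCESS PARAMETERS (RECORDS DELIMITED BY NEWLINE
                     FIELDS TERMINATED BY ',')
  LOCATION ('transaction.csv')
);

-- Single pass: split the number by position and join the three lookups.
INSERT /*+ APPEND */ INTO ext_transaction
SELECT t.phone_number, n.nation_desc, o.operator_desc, a.area_desc
FROM   transaction_csv t
       JOIN nation   n ON n.nation_id   = SUBSTR(t.phone_number, 1, 2)
       JOIN operator o ON o.operator_id = SUBSTR(t.phone_number, 3, 4)
       JOIN area     a ON a.area_id     = SUBSTR(t.phone_number, 7, 2);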
10 million rows are nothing in terms of what Oracle can process on even a small server today. I have dual-CPU AMD servers doing over 2,000 inserts per second in a single process, and a Perl program making up to 1,000 PL/SQL procedure calls per second. Oracle is extremely capable - as is today's hardware and software. But that needs a sound software engineering approach. And that approach says that we first need to fully understand the problem before we can solve it, treating the cause and not the symptom.

Similar Messages

  • Handling an internal table with more than 1 million records

    Hi All,
    We are facing a dump due to wrongly set storage parameters.
    Basically the dump is due to an internal table having more than 1 million records. We have increased the storage parameter size from 512 to 2048, but the dump is still happening.
    Please advise whether there is any other way to handle these kinds of internal tables.
    P.S.: We have tried the option of using a hashed table; this does not suit our scenario.
    Thanks and Regards,
    Vijay

    Hi,
    Your problem can be solved by populating the internal table in chunks. For that you have to use the database cursor concept.
    Hope this code helps.
    G_PACKAGE_SIZE = 50000.
    * Using a DB cursor to fetch the data in batches.
    OPEN CURSOR WITH HOLD DB_CURSOR FOR
      SELECT *
        FROM ZTABLE.
    DO.
      FETCH NEXT CURSOR DB_CURSOR
        INTO CORRESPONDING FIELDS OF TABLE IT_ZTABLE
        PACKAGE SIZE G_PACKAGE_SIZE.
      IF SY-SUBRC NE 0.
        CLOSE CURSOR DB_CURSOR.
        EXIT.
      ENDIF.
      " Process the current batch in IT_ZTABLE here; the next FETCH
      " overwrites it, so memory use stays bounded by the package size.
    ENDDO.

  • How to update more than 5 million records without error message ORA-00257:

    Hi,
    I need to update some columns in my table, which contains about 5 million records.
    I have already tried this:
    Update AAA_CDR
    Set RoamFload = Null;
    but the problem is that I got the error message ("ORA-00257: archiver error. Connect internal only, until freed.") and the update consumed about 6 hours with no result.
    Then I ran the command (Alter system set db_recovery_file_dest_size=50G) and the problem was solved.
    But I need to update about 15 columns of this table to be null. What should I do to overcome this message and update the table in a reasonable time?
    Please help me.

    The best way would be to allocate sufficient disk space for your archive log destination. Your database is not sized properly. The NOLOGGING option will not do much for you, because it only applies to direct-load operations, such as when the data inserted into a nologging table is selected from another table. An UPDATE will be logged regardless of the NOLOGGING status. Here is the quote from the manual:
    <quote>
    LOGGING|NOLOGGING
    LOGGING|NOLOGGING specifies that subsequent Direct Loader (SQL*Loader) and direct-load
    INSERT operations against a nonpartitioned index, a range or hash index partition, or
    all partitions or subpartitions of a composite-partitioned index will be logged (LOGGING)
    or not logged (NOLOGGING) in the redo log file.
    In NOLOGGING mode, data is modified with minimal logging (to mark new extents invalid
    and to record dictionary changes). When applied during media recovery, the extent
    invalidation records mark a range of blocks as logically corrupt, because the redo data
    is not logged. Therefore, if you cannot afford to lose this index, you must take a backup
    after the operation in NOLOGGING mode.
    If the database is run in ARCHIVELOG mode, media recovery from a backup taken before an
    operation in LOGGING mode will re-create the index. However, media recovery from a backup
    taken before an operation in NOLOGGING mode will not re-create the index.
    An index segment can have logging attributes different from those of the base table and
    different from those of other index segments for the same base table.
    </quote>
    If you are really desperate, you can try the following undocumented/unsupported command:
    ALTER DATABASE ARCHIVELOG COMPRESS ENABLE;
    That will cause the database to compress your archive logs and consume less space. This command is not documented or supported, not even in version 11.2.0.3, and it causes the database to start spewing ORA-00600 errors in version 10g. DO NOT USE IN A PRODUCTION ENVIRONMENT!!!!
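    If rebuilding the table is an option, the quoted rule can work in your favour: CREATE TABLE ... AS SELECT is a direct-load operation, so it can run with minimal redo under NOLOGGING. A sketch (the non-RoamFload column names are placeholders, and per the quote you must take a fresh backup afterwards):
    -- Rebuild instead of updating in place, so the work is a direct load.
    CREATE TABLE aaa_cdr_new NOLOGGING AS
    SELECT other_col1, other_col2,            -- placeholder columns to keep
           CAST(NULL AS NUMBER) AS roamfload  -- repeat for all 15 columns
    FROM   aaa_cdr;
    -- Then drop/rename, recreate indexes and grants, and back up the result.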

  • Updating table with more than 50 million records

    Hi Team,
    Updating three columns in a table which has 60 million records is taking a lot of time. How can the performance of the query be improved? The table has to be updated based on the values from different database tables.
    Thanks,
    Eshwar.

    Split them into small batches.
    When you are using multiple tables, combine them with JOINs.
    Have the corresponding indexes on the joining columns.
    Disable triggers, if any (if possible and it doesn't affect your logic - this would be the least favourable step).
    These should also help:
    http://blogs.msdn.com/b/repltalk/archive/2011/10/10/lessons-learned-updating-100-millions-rows.aspx
    http://www.sqlservergeeks.com/blogs/AhmadOsama/personal/450/sql-server-optimizing-update-queries-for-large-data-volumes
    http://stackoverflow.com/questions/7344984/update-large-number-of-rows-sql-server-2005
    Thanks,
    Jay

  • Error MDX result contains too many cells (more than 1 million). (WIS 10901)

    Hi,
    We have developed a universe on a BI query and developed a report on it. But while running this BO query in Web Intelligence we get the following error:
    A database error occured. The database error text is: Error in MDDataSetBW.GetCellData.  MDX result contains too many cells (more than 1 million). (WIS 10901)
    This BO query is restricted to one document number.
    When I check in the BI cube, there are no more than 300-400 records for that document number.
    If I restrict the BO query by document number, delivery number, material and acknowledged date, then the query runs successfully.
    Can anyone please help with this issue?

    Follow this article to get the MDX generated by the WebI report:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/90b02218-d909-2e10-1988-a2ca74547900
    Then try to execute the same in the MDXTEST transaction in BW.

  • Analyse a partitioned table with more than 50 million rows

    Hi,
    I have a partitioned table with more than 50 million rows. It was last analyzed on 1/25/2007. Do I need to analyze it again? (Queries on this table run very slowly.)
    If I do need to analyze it, what is the best way? Use DBMS_STATS and schedule a job?
    Thanks

    A partitioned table has global statistics as well as partition (and subpartition if the table is subpartitioned) statistics. My guess is that you mean to say that the last time that global statistics were gathered was in 2007. Is that guess accurate? Are the partition-level statistics more recent?
    Do any of your queries actually use global statistics? Or would you expect that every query involving this table would specify one or more values for the partitioning key and thus force partition pruning to take place? If all your queries are doing partition pruning, global statistics are irrelevant, so it doesn't matter how old and out of date they are.
    Are you seeing any performance problems that are potentially attributable to stale statistics on this table? If you're not seeing any performance problems, leaving the statistics well enough alone may be the most prudent course of action. Gathering statistics would only have the potential to change query plans. And since the cost of a query plan regressing is orders of magnitude greater than the benefit of a different query performing faster (at least for most queries in most systems), the balance of risks would argue for leaving the stats alone if there is no problem you're trying to solve.
    If your system does actually use global statistics and there are performance problems that you believe are potentially attributable to stale global statistics and your partition level statistics are accurate, you can gather just global statistics on the table probably with a reasonably small sample size. Make sure, though, that you back up your existing statistics just in case a query plan goes south. Ideally, you'd also have a test environment with identical (or nearly identical) data volumes that you could use to verify that gathering statistics doesn't cause any problems.
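    For example, backing up the current statistics and then gathering only global statistics might look like this (a sketch; the owner, table name and sample size are placeholders):
    BEGIN
      -- Back up the existing statistics first, in case a plan regresses.
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP', stattab => 'STATS_BACKUP');
      DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'APP', tabname => 'BIG_PART_TAB',
                                    stattab => 'STATS_BACKUP');
      -- Gather global statistics only, with a small sample.
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => 'APP',
                                    tabname          => 'BIG_PART_TAB',
                                    granularity      => 'GLOBAL',
                                    estimate_percent => 1);
    END;
    /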
    Justin

  • Storing more than 15 lakh (1.5 million) records in a TreeMap

    Hi All,
    I have a requirement to cache more than 15 lakh (1.5 million) records in a TreeMap. When I tested it, it took around one hour for 15 lakh records.
    Is there any better way to cache this huge volume of records?
    The 15 lakh records may increase in future, so my application should be able to cache an even larger volume.
    Please suggest a solution for how to proceed with this.
    Thanks in Advance.

    Yes, it takes around 1 hour.
    Please find the sample code:
    public void fetchFromDB(TreeMap<String, String> values) throws SQLException {
        String query = "My Query";
        // try-with-resources closes the statement and result set automatically
        try (PreparedStatement stmt = con.prepareStatement(query);
             ResultSet rs = stmt.executeQuery()) {
            while (rs.next()) {
                String name = null;
                String id = null;
                if (rs.getString(1) != null)
                    name = rs.getString(1).trim();
                if (rs.getString(2) != null)
                    id = rs.getString(2).trim();
                values.put(name, id); // was map.put(...) - use the passed-in TreeMap
            }
        } // do not swallow exceptions in an empty catch; let SQLException propagate
    }
    I don't think there is much in this code that would make it slow, but please let me know if I am going wrong somewhere.

  • Not able to update more than 10,000 records in CT04 for a characteristic

    Hi all,
    We are not able to update more than 10,000 records in CT04 for a certain characteristic.
    Is there any possible way to do this?
    Please advise... it's a production issue.
    Thanks.

    Hello,
    Please consider using a check table for the characteristic involved if you are working with a large number of assigned values.
    With a check table you can work with a huge number of values, and performance should also improve.
    Please refer to the link
    http://help.sap.com/saphelp_erp60_sp/helpdata/en/ec/62ae27416a11d1896d0000e8322d00/frameset.htm
    Section - Entering a Check Table
    Hopefully the information helps.
    Thanks
    Enda.

  • General Scenario- Adding columns into a table with more than 100 million rows

    I was asked/given a scenario, what issues do you encounter when you try to add new columns to a table with more than 200 million rows? How do you overcome those?
    Thanks in advance.
    svk

    For such a large table, it is better to add the new column to the end of the table to avoid any performance impact, as RSingh suggested.
    Also avoid using any default on the newly added column, or SQL Server will have to fill in 200 million fields with this default value. If you need a default, first add an empty column and update it in small batches (otherwise you lock up the whole table), then add the default once all the rows have a value for the new column.

  • How to get data from large table (more than 9 million rows) by EJB?

    I have a giant table; it has more than 9 million rows.
    I want to use EJB finder methods to get data from this table, but I always get a not-enough-memory error or a timeout error.
    Can anyone give me solutions?
    Thx

    Your problem may be that you are simply trying to load so many objects (found by your finder) that you are exceeding available memory. For example, if each object is 100 bytes and you try to load 1,000,000 objects, that's 100 MB of memory gone.
    You could try increasing the amount of memory available to OC4J with the appropriate argument on the command line (or in the 10gAS console). For example, to make 1 GB available to OC4J you would add the argument:
    -Xmx1000m
    Of course you need to have this available as physical memory on your server, or you will incur serious swapping.
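    An alternative to simply raising the heap is not to load all rows at once but to page through them. On the Oracle side a page can be fetched with the classic ROWNUM pattern (a sketch; the table name, ordering column and bind variables are placeholders):
    -- Fetch only rows lower_bound+1 .. upper_bound of the ordered result.
    SELECT *
    FROM  (SELECT t.*, ROWNUM rn
           FROM   (SELECT * FROM big_table ORDER BY id) t
           WHERE  ROWNUM <= :upper_bound)
    WHERE  rn > :lower_bound;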
    Chris

  • Fastest way to Delete More than 20 Million Rows

    Hi
    We have a huge database with about 20 tables, each having more than 1 billion rows.
    I need to delete more than 20 million rows from each of those tables, and the delete should also be reflected on the standby databases. Please suggest the fastest and best approach.
    P.S.: I cannot perform 'Create table new as select * from old where ....' due to lack of disk space.
    Regards
    Mudassir.

    Mudassir wrote:
    We have a huge database with about 20 tables, each having more than 1 billion rows.
    I need to delete more than 20 million rows from each of those tables, and the delete should also be reflected on the standby databases. Please suggest the fastest and best approach.
    P.S.: I cannot perform 'Create table new as select * from old where ....' due to lack of disk space.
    Do you need the entire delete to complete as a single transaction, or can you delete and commit each table separately?
    How many indexes do you have on these tables, and are they split between local and global?
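    If per-batch commits are acceptable, one common pattern is a batched delete with intermediate commits - shown purely for illustration (the table name and purge predicate are hypothetical, and this is not necessarily the right answer for this system):
    BEGIN
      LOOP
        DELETE FROM big_table                -- hypothetical table
        WHERE  created < DATE '2010-01-01'   -- hypothetical purge rule
        AND    ROWNUM <= 100000;             -- batch size
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;                              -- lets undo be reused between batches
      END LOOP;
      COMMIT;
    END;
    /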
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • MDX result contains many cells. (more than 1 million)

    Hi experts!!!
    I have a WebI report, but when I try to refresh the report, it shows me the following message:
    The database error text is: Error in MDDataSetBW.GetCellData. MDX result contains many cells. (more than 1 million). (WIS 10901)
    I have read that this error occurs because the report returns too much data, but that it can be solved.
    I have been reading the following SAP notes:
    1232751 (this note is for SAP BW release 7)
    931479 (this note is for SAP BW 3.5, Support Package 17)
    We have SAP BW 3.5 with Support Package 22, so these notes do not apply to us.
    Do you know of an SAP note that solves this problem and works with our SAP BW version?
    I will wait for your answer.
    Ruddy Alvarado.

    Hi!!
    This is my mdx query:
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGRXW55RLOXZEFNWEPC11Z], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ], [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LGOGS5KO4DB7KIULHYQZDZRR], [Measures].[4LGOGSD972Z0Q72ARC139FYHJ], [Measures].[4LGOGSKXQ1KQ8TLQX63FJHX7B], [Measures].[4LGOGSSM906FRG57305RTJVX3], [Measures].[4LGOGT0ARYS5A2ON8U843LUMV], [Measures].[4LGOGT7ZAXDUSP83EOAGDNTCN], [Measures].[4LGOGWF77CFHK3BTU79KKHA3B], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS, [ZSIOCOPRO__ZSIOSABOR].[LEVEL01].MEMBERS ), [0CALDAY].[LEVEL01].MEMBERS ), [ZSIOCOPRO__ZSIOTAMAN].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS ),  { [0SOLD_TO__0REGION].[GT G20] }  ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0CUST_SALES__0SALES_DIST].[10CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION], [ZSIOCOPRO__ZSIOSABOR].[5ZSIOCOPRO__ZSIOSABOR], [ZSIOCOPRO__ZSIOTAMAN].[5ZSIOCOPRO__ZSIOTAMAN] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ), [ZSIOCOPRO__ZSIOTAMAN].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION], [ZSIOCOPRO__ZSIOTAMAN].[5ZSIOCOPRO__ZSIOTAMAN] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY  { [0SOLD_TO__0REGION].[GT G20] }  DIMENSION PROPERTIES [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [ZSIOCTCOR/ZSIO_BO_CXC]: SELECT  { [Measures].[3Z4D1BQHVZ7VR5IJWHLKNPIL8], [Measures].[3Z4D1BY6EXTL9S202BNWXRHB0], [Measures].[3Z4D1BITD0M68IZ3QNJ8DNJVG], [Measures].[3Z4D1BB4U20GPWFNKTGW3LL5O], [Measures].[3Z4D1B3GB3ER79W7EZEJTJMFW], [Measures].[3Z4D1AGEQ7LMNE9UXH7IZDQAK], [Measures].[3Z4D1AVRS4T1ONCR95C7JHNQ4], [Measures].[480ABQOBFURXYE3L9IUXLSHNB] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [ZSIOCLIE__ZSIOTCOB].[LEVEL01].MEMBERS, [ZSIOCLIE__ZSIORUTA].[LEVEL01].MEMBERS ), [ZSIODAFA].[LEVEL01].MEMBERS ), [ZSIOCLIE].[LEVEL01].MEMBERS ), [ZSIOCLIE__ZSIOCET].[LEVEL01].MEMBERS ), [ZSIOAGENC].[LEVEL01].MEMBERS ),  { [ZSIOCLIE__ZSIOREGIO].[90120] }  ) DIMENSION PROPERTIES [ZSIOAGENC].[5ZSIOAGENC], [ZSIOCLIE].[2ZSIOCOSIM], [ZSIOCLIE].[2ZSIOLICRE], [ZSIOCLIE].[4ZSIOCLIE], [ZSIOCLIE__ZSIOCET].[5ZSIOCLIE__ZSIOCET], [ZSIOCLIE__ZSIOREGIO].[5ZSIOCLIE__ZSIOREGIO], [ZSIOCLIE__ZSIORUTA].[5ZSIOCLIE__ZSIORUTA], [ZSIOCLIE__ZSIOTCOB].[1ZSIOCLIE__ZSIOTCOB], [ZSIODAFA].[2ZSIODAFA] ON ROWS FROM [ZSIOCTCOR/ZSIO_BO_CXC]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS, [0CALDAY].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ),  { [0SOLD_TO__0REGION].[GT G20] }  ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY  { [0SOLD_TO__0REGION].[GT G20] }  DIMENSION PROPERTIES [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY CROSSJOIN( [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ) DIMENSION PROPERTIES [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY CROSSJOIN( [0CALDAY].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( [0CALDAY].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    How do I run these in MDXTEST?

  • More than 1 million files on multi-terabyte UFS file systems

    How do you configure a UFS file system for more than 1 million files when it exceeds 1 terabyte? I've got several Sun RAID subsystems where this is necessary.

    Thanks. You are right on. According to official Sun channels:
    Paula Van Wie wrote:
    Hi Ron,
    This is what I've found out.
    No, there is no way around the limitation. I would suggest an alternate file system if possible - ZFS, as it would give the most usable space, since inodes are no longer used.
    As the customer noted, if the inode values were increased significantly and an fsck were required, there is the possibility that the fsck could take days or weeks to complete. So in order to avoid angry customers having to wait a day or two for fsck to finish, the limit was imposed.
    And so far I've heard that there should not be corruption using ZFS and RAID.
    Paula

  • How to get more than one lakh (100,000) records in 1 or 2 seconds

    Please help, it's urgent.
    I need to retrieve records from several tables; there are more than one lakh (100,000) records, and the query takes more than 20 seconds. How do I reduce the time to about one second?
    My SQL:
    SELECT
    tl.ProjectID,
    pr.jobname,
    name as Department_name,
    ChargeNum,
    (ac.ActivityCode ||':'||ac.SubCode) as ActivityCodeName,
    SUM(HoursWorked), (Case When isBilled=1 or billedRate<>0 then BilledRate else ppr.Rate End) as RATE
    FROM
    TimeLogEntries tl INNER JOIN activitycodes ac on ac.ACTIVITYCODEID=tl.ACTIVITYCODEID INNER JOIN projectrates ppr on tl.ACTIVITYCODEID = ppr.ACTIVITYCODEID and tl.projectid=ppr.projectid ,
    projects pr INNER JOIN departments d on d.DEPARTMENTID =pr.REVENUECENTERID
    WHERE
    to_char(Date_,'yyyy-mm-dd') BETWEEN '2006-01-01' and '2008-12-30'
    AND
    tl.ProjectID = pr.ProjectID
    Group By
    tl.ProjectID,
    tl.ActivityCodeID,
    BilledRate,
    ChargeNum,
    pr.jobname,
    name,
    (ac.ActivityCode ||':'||ac.SubCode),
    (Case When isBilled=1 or billedRate<>0 then BilledRate else ppr.Rate End)
    ORDER BY
    tl.ChargeNum;
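    One thing worth noting (an observation, not from this thread): wrapping Date_ in TO_CHAR prevents any index on that column from being used. Assuming Date_ is a DATE column, an equivalent index-friendly filter compares the column directly:
    -- Same range as TO_CHAR(Date_,'yyyy-mm-dd') BETWEEN '2006-01-01' AND '2008-12-30',
    -- but sargable, so an index on Date_ can be used.
    WHERE Date_ >= DATE '2006-01-01'
      AND Date_ <  DATE '2008-12-31'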

    Hi,
    Even I am searching for something similar.
    I want to have 3 calendars on one page.
    I am getting the same message: calendar already exists on page 2. You can only add one calendar per page. Select a different page.
    Please help.

  • How to call the same query more than once with different selection criteria

    Hi,
    Does anybody know how to solve this issue? I need to call one query with a fixed structure more than once, with different selection criteria. For example, I have the following data:
    Sales organization XX
                    Income 2008   Income 2009
    Customer A           10            20
    Customer B           30             0
    Sales organization YY
                    Income 2008   Income 2009
    Customer A           20             5
    Customer B           50            10
    Now, I need this: at the selection screen of the query, the user fills the variable characteristic "Sales organization" with the interval XX - YY; then I need to generate two separate results, one per sales organization - one for sales organization XX and the second for SO YY - displayed each on a separate page, where the result for SO YY is displayed under the result for SO XX. Are there any options for how to do this, for example in Report Designer or WAD, or with programming? In Report Designer it is possible to use one query more than once, but I don't know how to force each query in RD to display the result for only one sales organization, as defined in the selection screen.
    Thank you very much
    J.

    Hello,
    Thanks to all for the cooperation. We finally solved this issue in the following way:
    The user fills in the appropriate SOs on the selection screen, which is defined as a range. This results in the selected SOs being listed in the report below each other (standard behavior). We achieved the required solution with the Report Designer: we set a page break under each result row in RD. This divides the report into the required parts per SO, each shown on a separate page.
    J.
