More than 1 million files on multi-terabyte UFS file systems

How do you configure a UFS file system for more than 1 million files when it exceeds 1 terabyte? I've got several Sun RAID subsystems where this is necessary.

Thanks. You are right on. Here is what came back through official Sun channels:
Paula Van Wie wrote:
Hi Ron,
This is what I've found out.
No, there is no way around the limitation. I would suggest an alternate
file system if possible; ZFS would give you the most usable space, since
it does not preallocate a fixed inode table.
As the customer noted, if the inode count were increased significantly
and an fsck were required, that fsck could take days or weeks to
complete. The limit was imposed to avoid angry customers having to wait
a day or two for fsck to finish.
And so far I've heard that there should not be corruption using ZFS
with RAID.
Paula
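
For anyone hitting the same wall, here is a minimal sketch of checking inode headroom on an existing UFS file system and of moving the data to ZFS instead (Solaris 10 assumed; the device, mount point, and pool names are hypothetical):

# Show total and free inodes for a mounted UFS file system
df -o i /export/data

# UFS inode density is fixed at newfs time via -i (bytes per inode) and
# cannot be raised afterwards; on multiterabyte UFS the density is
# capped, which is the limitation discussed above.
newfs -i 8192 /dev/rdsk/c1t0d0s0

# ZFS has no preallocated inode table, so the file count is not fixed
# up front.
zpool create datapool c1t0d0
zfs create datapool/data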

Similar Messages

  • Fastest way to Delete More than 20 Million Rows

    Hi
    We have a huge database with about 20 tables of more than 1 billion rows each.
    I need to delete more than 20 million rows from each of those tables, and the deletes should also be reflected on the standby databases. Please suggest the fastest and best approach.
    P.S.: I cannot perform 'Create table new as select * from old where ....' due to lack of disk space.
    Regards
    Mudassir.

    Mudassir wrote:
    We have a huge database with about 20 tables of more than 1 billion rows each.
    I need to delete more than 20 million rows from each of those tables, and the deletes should also be reflected on the standby databases. Please suggest the fastest and best approach.
    P.S.: I cannot perform 'Create table new as select * from old where ....' due to lack of disk space.
    Do you need the entire delete to complete as a single transaction, or can you delete and commit each table separately?
    How many indexes do you have on these tables, and are they split between local and global?
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk
    To post code, statspack/AWR report, execution plans or trace files, start and end the section with the tag {noformat}{noformat} (lowercase, curly brackets, no spaces) so that the text appears in fixed format.
    There is a +"Preview"+ tab at the top of the text entry panel. Use this to check what your message will look like before you post the message. If it looks a complete mess you're unlikely to get a response. (Click on the +"Plain text"+ tab if you want to edit the text to tidy it up.)
    "Science is more than a body of knowledge; it is a way of thinking"
    Carl Sagan                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   
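
    For reference, the batched-delete approach Jonathan is alluding to usually looks something like this in PL/SQL (a minimal sketch; the table name, purge predicate, and batch size are hypothetical):

    BEGIN
      LOOP
        DELETE FROM big_table
         WHERE created < DATE '2010-01-01'   -- hypothetical purge predicate
           AND ROWNUM <= 100000;             -- batch size
        EXIT WHEN SQL%ROWCOUNT = 0;
        COMMIT;                              -- commit per batch to keep undo small
      END LOOP;
      COMMIT;
    END;
    /

    Each batch is shipped to the standby databases through normal redo apply, so the standbys stay in step without any extra work.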

  • Error MDX result contains too many cells (more than 1 million). (WIS 10901)

    Hi,
    We have developed a universe on a BI query and built a report on it. But while running this BO query in Web Intelligence we get the following error:
    A database error occured. The database error text is: Error in MDDataSetBW.GetCellData.  MDX result contains too many cells (more than 1 million). (WIS 10901)
    This BO query is restricted to one document number.
    Now, when I check in the BI cube, there are no more than 300-400 records for that document number.
    If I restrict the BO query by document number, delivery number, material and acknowledged date, the query runs successfully.
    Can anyone please help with this issue?

    Follow this article to get the MDX generated by the WebI report:
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/90b02218-d909-2e10-1988-a2ca74547900
    Then try to execute the same MDX in the MDXTEST transaction in BW.

  • How to open more than one RAW file by drag & drop/double-click with CS2?

    Hello,
    Is it possible to open more than one RAW file from iPhoto with Photoshop's RAW converter in a single step? If I drag and drop my selection onto my Photoshop icon, PS opens only the iPhoto-created library JPGs of the RAW files, not the RAW files themselves. (I've turned off the copying of imported photos to the iPhoto library; instead, iPhoto seems to create a library JPG from every RAW I import.) If I double-click a selection of RAWs in iPhoto, only the double-clicked RAW file is opened in Photoshop. Is it possible to select more than one RAW file in iPhoto and get Photoshop to open all the selected RAWs rather than the JPGs? It would make my work a lot easier... thank you.
    rockenbaby

    Hello,
    Try selecting the photos that you want, however many, then go to File then Export.
    You can make all your file selections (image size etc.) and then save them directly to your PS file.
    Hope this helps.
    cjp

  • Am I able to use more than 1 audio file in iDVD?

    Am I able to use more than one audio file in my iDVD project? I want to have different songs in my subtitles.

    Yes. Each menu has an info/settings pane that you can add a different audio track to.
    When you add a track to a menu, the playing time should be set to 30 seconds or 1 minute. If you leave it at the full length, that full length will count toward the 15-minute total allowed for all menus.
    OT

  • Analyse a partitioned table with more than 50 million rows

    Hi,
    I have a partitioned table with more than 50 million rows. The last analyse was on 1/25/2007. Do I need to analyse it? (Queries on this table run very slowly.)
    If I need to analyse it, what is the best way? Use DBMS_STATS and schedule a job?
    Thanks

    A partitioned table has global statistics as well as partition (and subpartition if the table is subpartitioned) statistics. My guess is that you mean to say that the last time that global statistics were gathered was in 2007. Is that guess accurate? Are the partition-level statistics more recent?
    Do any of your queries actually use global statistics? Or would you expect that every query involving this table would specify one or more values for the partitioning key and thus force partition pruning to take place? If all your queries are doing partition pruning, global statistics are irrelevant, so it doesn't matter how old and out of date they are.
    Are you seeing any performance problems that are potentially attributable to stale statistics on this table? If you're not seeing any performance problems, leaving the statistics well enough alone may be the most prudent course of action. Gathering statistics would only have the potential to change query plans. And since the cost of a query plan regressing is orders of magnitude greater than the benefit of a different query performing faster (at least for most queries in most systems), the balance of risks would argue for leaving the stats alone if there is no problem you're trying to solve.
    If your system does actually use global statistics and there are performance problems that you believe are potentially attributable to stale global statistics and your partition level statistics are accurate, you can gather just global statistics on the table probably with a reasonably small sample size. Make sure, though, that you back up your existing statistics just in case a query plan goes south. Ideally, you'd also have a test environment with identical (or nearly identical) data volumes that you could use to verify that gathering statistics doesn't cause any problems.
    Justin
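
    If you do go down that road, the backup-then-gather sequence looks roughly like this (a minimal sketch; the schema, table, and stats-table names are hypothetical):

    BEGIN
      -- Back up the current statistics first, so they can be restored
      -- if a query plan goes south.
      DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'STATS_BACKUP');
      DBMS_STATS.EXPORT_TABLE_STATS(ownname => 'SCOTT', tabname => 'BIG_PART_TAB',
                                    stattab => 'STATS_BACKUP');
      -- Gather only the global statistics, with a small sample size.
      DBMS_STATS.GATHER_TABLE_STATS(ownname          => 'SCOTT',
                                    tabname          => 'BIG_PART_TAB',
                                    granularity      => 'GLOBAL',
                                    estimate_percent => 1);
    END;
    /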

  • Handling internal table with more than 1 million records

    Hi All,
    We are facing a dump caused by wrongly set storage parameters.
    Basically the dump is due to an internal table having more than 1 million records. We have increased the storage parameter size from 512 to 2048, but the dump still happens.
    Please advise whether there is any other way to handle this kind of internal table.
    P.S.: we have tried the option of using a hashed table; this does not suit our scenario.
    Thanks and Regards,
    Vijay

    Hi
    Your problem can be solved by populating the internal table in chunks. For that you have to use the database cursor concept.
    Hope this code helps.

    DATA: G_PACKAGE_SIZE TYPE I VALUE 50000,
          IT_ZTABLE      TYPE STANDARD TABLE OF ZTABLE,
          DB_CURSOR      TYPE CURSOR.

    " Use a database cursor to fetch the data in batches.
    OPEN CURSOR WITH HOLD DB_CURSOR FOR
      SELECT * FROM ZTABLE.
    DO.
      FETCH NEXT CURSOR DB_CURSOR
        INTO CORRESPONDING FIELDS OF TABLE IT_ZTABLE
        PACKAGE SIZE G_PACKAGE_SIZE.
      IF SY-SUBRC NE 0.
        CLOSE CURSOR DB_CURSOR.
        EXIT.
      ENDIF.
      " Process the current batch of IT_ZTABLE here before the next fetch.
    ENDDO.

  • General scenario - adding columns to a table with more than 100 million rows

    I was asked/given a scenario: what issues do you encounter when you try to add new columns to a table with more than 200 million rows, and how do you overcome them?
    Thanks in advance.
    svk

    For such a large table, it is better to add the new column at the end of the table to avoid any performance impact, as RSingh suggested.
    Also avoid using a default on the newly added column, or SQL Server will have to fill in 200 million fields with this default value. If you need one, add an empty (nullable) column and populate it using small batches (otherwise you lock up the whole table). Add the default after all the rows have a value for the new column.
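
    In T-SQL that pattern looks roughly like this (a minimal sketch; the table name, column name, and batch size are hypothetical):

    -- Adding a NULLable column without a default is a quick metadata-only change.
    ALTER TABLE dbo.BigTable ADD NewCol INT NULL;

    -- Backfill in small batches so each UPDATE takes only short-lived locks.
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        UPDATE TOP (50000) dbo.BigTable
           SET NewCol = 0
         WHERE NewCol IS NULL;
        SET @rows = @@ROWCOUNT;
    END

    -- Attach the default only after every row has a value.
    ALTER TABLE dbo.BigTable
      ADD CONSTRAINT DF_BigTable_NewCol DEFAULT (0) FOR NewCol;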

  • Is it Possible to Play More than 1 Audio File in Parallel?

    Friends,
    Is it possible to play more than one audio file at a time using a Player or Processor?
    For example, I have:
    Audio1.MP3, which plays for 2 minutes (music only).
    Audio2.MP3, which plays for 1 minute (voice only).
    I want to start with Audio1.MP3 and, when it reaches the 1-minute mark, have Audio2.MP3 play in parallel with Audio1.MP3, so that the two audio files finish at the same time.
    I want my music and voice to be mixed, i.e. I want to mix two audio tracks.
    Is it possible through JMF?
    Regards
    Senthil

    I don't know if it's possible with JMF, but it is with javax.sound.
    You write a method to play a file, then create two threads where each one plays a different sound file, and tell one of them to wait 60000 millis.
    R. Hollenstein
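
    A minimal sketch of that idea with javax.sound.sampled (the file names are hypothetical; Clip handles WAV/AU out of the box, so the MP3s would first need converting or an MP3 decoder SPI on the classpath):

    import javax.sound.sampled.AudioInputStream;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.Clip;
    import java.io.File;

    public class TwoTrackPlayer {

        // Open and start one audio file; a Clip plays on its own thread.
        static Clip play(String path) throws Exception {
            AudioInputStream in = AudioSystem.getAudioInputStream(new File(path));
            Clip clip = AudioSystem.getClip();
            clip.open(in);
            clip.start();
            return clip;
        }

        public static void main(String[] args) throws Exception {
            play("music.wav");     // 2-minute background music
            Thread.sleep(60_000);  // wait one minute before the voice joins
            play("voice.wav");     // 1-minute voice track, heard alongside the music
            Thread.sleep(60_000);  // keep the JVM alive until both tracks end
        }
    }

    The audio system mixes the two running Clips on output, so the tracks are heard together; producing a single mixed output file would take extra work on the raw sample buffers.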

  • How to get data from large table (more than 9 million rows) by EJB?

    I have a giant table with more than 9 million rows.
    I want to use EJB finder methods to get data from this table, but I always get a not-enough-memory error or a timeout error.
    Can anyone give me solutions?
    Thx

    Your problem may be that you are simply trying to load so many objects (found by your finder) that you are exceeding available memory. For example, if each object is 100 bytes and you try to load 1,000,000 objects, that's 100 MB of memory gone.
    You could try increasing the amount of memory available to OC4J with the appropriate argument on the command line (or in the 10gAS console). For example, to make 1 GB available to OC4J you would add the argument:
    -Xmx1000m
    Of course you need to have this available as physical memory on your server or you will incur serious swapping.
    Chris
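
    For a standalone OC4J instance started from the command line, that would look something like this (a sketch, assuming the usual java -jar launch):

    % java -Xmx1000m -jar oc4j.jar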

  • Increase performance query more than 10 millions records significantly

    The story is:
    Every day there are more than 10 million records whose data arrives in text files (.csv (comma-separated value) extension, or other formats).
    An example text file name is transaction.csv:
    Phone_Number
    6281381789999
    658889999888
    618887897
    etc. (more than 10 million rows)
    From transaction.csv the data is then split into 3 RAM (in-memory) tables:
    1st: table nation (nation_id, nation_desc)
    2nd: table operator (operator_id, operator_desc)
    3rd: table area (area_id, area_desc)
    These 3 RAM tables are then queried to produce the physical table EXT_TRANSACTION (on hard disk).
    The resulting physical Oracle table, EXT_TRANSACTION, has these columns:
    Phone_Number  Nation_Desc  Operator_Desc  Area_Desc
    ===================================================
    6281381789999 INA           SMP           SBY
    So: text files (transaction.csv) --> RAM tables --> Oracle table (EXT_TRANSACTION).
    The first 2 digits of the phone number are the nation_id, the next 4 digits the operator_id, and the next 2 digits the area_id.
    I once heard that, to increase performance significantly, there is a technique to create tables in memory (RAM) rather than on hard disk.
    Any advice would be much appreciated.
    Thanks.

    Oracle uses sophisticated algorithms for its various memory caches, including buffering data in memory. This is described in Oracle® Database Concepts.
    You can tell Oracle via the CACHE table clause to keep blocks for that table in the buffer cache (refer to the URL for the technical details of how this is done).
    However, this means there is now less of the buffer cache available to cache other frequently used data. So this approach could make accessing one table a bit faster at the expense of making access to other tables slower.
    This is a balancing act - how much can one "interfere" with the cache before affecting and degrading performance? Oracle also recommends that this type of "forced" caching be used for small lookup tables. It is not a good idea to use it on large tables.
    As for your problem - why do you assume that keeping data in memory will make processing faster? That is a very limited approach. Memory is a resource that is in high demand. It is a very finite resource. It needs to be carefully spent to get the best and optimal performance.
    The buffer cache is designed to cache "hot" (often accessed) data blocks. So in all likelihood, telling Oracle to cache a table you use a lot is not going to make it faster. Oracle is already caching the hot data blocks as best it can.
    You also need to consider what the actual performance problem is. If your process needs to crunch tons of data, it is going to be slow. Throwing more memory at it will be treating the symptom - not the actual problem, which is that tons of data are being processed.
    So you need to define the actual problem. Perhaps it is not slow I/O - there could be a user-defined PL/SQL function used as part of the ELT process that causes the problem. Parallel processing could be used to do more I/O at the same time (assuming the I/O subsystem has the capacity). The process can perhaps be designed better - and instead of making multiple passes through a data set, crunching the same data (but different columns) again and again, do it in a single pass.
    10 million rows are nothing compared to what Oracle can process on even a small server today. I have dual-CPU AMD servers doing over 2,000 inserts per second in a single process, and a Perl program making up to 1,000 PL/SQL procedure calls per second. Oracle is extremely capable - as is today's hardware and software. But that needs a sound software engineering approach. And that approach says that we first need to fully understand the problem before we can solve it, treating the cause and not the symptom.
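
    For completeness, the CACHE clause mentioned above looks like this (a sketch using the poster's small nation lookup table; the column definitions are hypothetical, and this is not something to apply to the large transaction table):

    ALTER TABLE nation CACHE;

    -- or, equivalently, at creation time:
    CREATE TABLE nation (
      nation_id   VARCHAR2(2),
      nation_desc VARCHAR2(40)
    ) CACHE;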

  • Import more than 1 dump file

    Kindly can someone help me with how to specify more than one dump file in the parameter file? I know how to specify one dump file in the parfile, but I don't know how to specify
    two dump files to load, as my export produces two dump files because of the file-size limit.

    File: exp.par
    FILESIZE=1024MB
    FILE=exp_f1.dmp,exp_f2.dmp,exp_f3.dmp,exp_f4.dmp
    FULL=Y
    DIRECT=Y
    LOG=exp_full.log
    % exp system/manager PARFILE=exp.par

  • How to include more than one jar file

    My application uses more than one jar file, and all the jar files I am using are signed. I include them in the resources section of the ".jnlp" file. The section of code looks like this:
    <resources>
      <jar href="utestfw.jar" main="true" download="eager"/>
      <jar href="crimson.jar"/>
      <jar href="jaxp.jar"/>
    </resources>
    <application-desc main-class="utestfw.ObjectBrowser"/>
    The deployment of the application is successful. The jar file "utestfw.jar" contains the application's main class, and when the application is launched it works; but when I choose a feature which involves the classes of either "crimson.jar" or "jaxp.jar", the application terminates and the Web Start console and the application close automatically. Am I wrong in how I am adding the jar files? What is the way to add more than one jar file to the ".jnlp" file?
    Can anyone please help me?
    -Aparna

    I'll post the jnlp file which I am using. To sign all the jars I've used the same alias (a self-signed certificate created with keytool and jarsigner). Here is the jnlp file:
    <?xml version="1.0" encoding="utf-8"?>
    <jnlp spec="0.2 1.0"
          codebase="http://127.0.0.1:8080/healthdec"
          href="testTool.jnlp">
      <information>
        <title>Unit Test Manager</title>
        <vendor>Sun Microsystems, Inc.</vendor>
        <description>A minimalist drawing application along the lines of Illustrator</description>
        <icon href="images/testing.gif"/>
        <offline-allowed/>
      </information>
      <resources>
        <j2se version="1.3+ 1.2+"/>
        <jar href="utestfw.jar" main="true" download="eager"/>
        <jar href="crimson.jar" main="false" download="eager"/>
        <jar href="jaxp.jar" main="false" download="eager"/>
      </resources>
      <application-desc main-class="utestfw.ObjectBrowser"/>
      <security>
        <all-permissions/>
      </security>
    </jnlp>
    Can you please look at the above code and suggest where I am wrong? Thanks.
    -Regards
    Aparna

  • More than 2 OCR files and 3 VOTE files

    Is it possible to add more than 2 OCR files and more than 3 vote files in a 2-node RAC?

    Oracle Clusterware supports multiple voting disks, but you must have an odd number of them, such as three, five, and so on. If you define a single voting disk, then you should use external mirroring to provide redundancy.
    Regards,
    Awanish Kumar
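
    For reference, on pre-11.2 Clusterware an extra voting disk is added with crsctl, along the lines of the sketch below (the raw-device path is hypothetical, and depending on the version the clusterware stack may need to be down first):

    # crsctl add css votedisk /dev/raw/raw3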

  • MDX result contains many cells. (more than 1 million)

    Hi experts!
    I have a WebI report, but when I try to refresh the report, it shows me the following message:
    The database error text is: Error in MDDataSetBW.GetCellData. MDX result contains many cells. (more than 1 million). (WIS 10901)
    I have read that this error occurs because the report returns too much data, but that it can be solved.
    I have been reading the following SAP notes:
    1232751 (this note is for SAP BW release 7)
    931479 (this note is for SAP BW 3.5, Support Package 17)
    We have SAP BW 3.5 with Support Package 22, so these notes do not apply to me.
    Do you know of an SAP note that solves this problem and applies to my SAP BW version?
    I will wait for your answer.
    Ruddy Alvarado.

    Hi!
    These are my MDX queries:
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGRXW55RLOXZEFNWEPC11Z], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ], [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LGOGS5KO4DB7KIULHYQZDZRR], [Measures].[4LGOGSD972Z0Q72ARC139FYHJ], [Measures].[4LGOGSKXQ1KQ8TLQX63FJHX7B], [Measures].[4LGOGSSM906FRG57305RTJVX3], [Measures].[4LGOGT0ARYS5A2ON8U843LUMV], [Measures].[4LGOGT7ZAXDUSP83EOAGDNTCN], [Measures].[4LGOGWF77CFHK3BTU79KKHA3B], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS, [ZSIOCOPRO__ZSIOSABOR].[LEVEL01].MEMBERS ), [0CALDAY].[LEVEL01].MEMBERS ), [ZSIOCOPRO__ZSIOTAMAN].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS ),  { [0SOLD_TO__0REGION].[GT G20] }  ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0CUST_SALES__0SALES_DIST].[10CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION], [ZSIOCOPRO__ZSIOSABOR].[5ZSIOCOPRO__ZSIOSABOR], [ZSIOCOPRO__ZSIOTAMAN].[5ZSIOCOPRO__ZSIOTAMAN] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ), [ZSIOCOPRO__ZSIOTAMAN].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION], [ZSIOCOPRO__ZSIOTAMAN].[5ZSIOCOPRO__ZSIOTAMAN] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ), [0CUST_SALES__0SALES_DIST].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_DIST].[20CUST_SALES__0SALES_DIST], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY CROSSJOIN( [0CUST_SALES__0SALES_OFF].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_OFF].[10CUST_SALES__0SALES_OFF], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [Z_0SD_C03/ZSD_BO_QUERY_JR]: SELECT  { [Measures].[4LGOGWMVQB172PVA01BWUJ8T3], [Measures].[4LHBQ76F39D7U6OU6M260ZVBR], [Measures].[4LB85FFRHOZZJ7NWNWG082RTJ] }  ON COLUMNS , NON EMPTY  { [0SOLD_TO__0REGION].[GT G20] }  DIMENSION PROPERTIES [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR]
    [ZSIOCTCOR/ZSIO_BO_CXC]: SELECT  { [Measures].[3Z4D1BQHVZ7VR5IJWHLKNPIL8], [Measures].[3Z4D1BY6EXTL9S202BNWXRHB0], [Measures].[3Z4D1BITD0M68IZ3QNJ8DNJVG], [Measures].[3Z4D1BB4U20GPWFNKTGW3LL5O], [Measures].[3Z4D1B3GB3ER79W7EZEJTJMFW], [Measures].[3Z4D1AGEQ7LMNE9UXH7IZDQAK], [Measures].[3Z4D1AVRS4T1ONCR95C7JHNQ4], [Measures].[480ABQOBFURXYE3L9IUXLSHNB] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( CROSSJOIN( [ZSIOCLIE__ZSIOTCOB].[LEVEL01].MEMBERS, [ZSIOCLIE__ZSIORUTA].[LEVEL01].MEMBERS ), [ZSIODAFA].[LEVEL01].MEMBERS ), [ZSIOCLIE].[LEVEL01].MEMBERS ), [ZSIOCLIE__ZSIOCET].[LEVEL01].MEMBERS ), [ZSIOAGENC].[LEVEL01].MEMBERS ),  { [ZSIOCLIE__ZSIOREGIO].[90120] }  ) DIMENSION PROPERTIES [ZSIOAGENC].[5ZSIOAGENC], [ZSIOCLIE].[2ZSIOCOSIM], [ZSIOCLIE].[2ZSIOLICRE], [ZSIOCLIE].[4ZSIOCLIE], [ZSIOCLIE__ZSIOCET].[5ZSIOCLIE__ZSIOCET], [ZSIOCLIE__ZSIOREGIO].[5ZSIOCLIE__ZSIOREGIO], [ZSIOCLIE__ZSIORUTA].[5ZSIOCLIE__ZSIORUTA], [ZSIOCLIE__ZSIOTCOB].[1ZSIOCLIE__ZSIOTCOB], [ZSIODAFA].[2ZSIODAFA] ON ROWS FROM [ZSIOCTCOR/ZSIO_BO_CXC]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( CROSSJOIN( [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS, [0CALDAY].[LEVEL01].MEMBERS ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ),  { [0SOLD_TO__0REGION].[GT G20] }  ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0CUST_SALES__0SALES_GRP].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CUST_SALES__0SALES_GRP].[10CUST_SALES__0SALES_GRP], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY  { [0SOLD_TO__0REGION].[GT G20] }  DIMENSION PROPERTIES [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY CROSSJOIN( [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ) DIMENSION PROPERTIES [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY CROSSJOIN( [0CALDAY].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]: SELECT  { [Measures].[4LGOQ5A0UT58OU46P0PO9LN4N] }  ON COLUMNS , NON EMPTY CROSSJOIN( CROSSJOIN( [0CALDAY].[LEVEL01].MEMBERS,  { [0SOLD_TO__0REGION].[GT G20] }  ), [0MATERIAL__0MATL_TYPE].[LEVEL01].MEMBERS ) DIMENSION PROPERTIES [0CALDAY].[20CALDAY], [0MATERIAL__0MATL_TYPE].[50MATERIAL__0MATL_TYPE], [0SOLD_TO__0REGION].[10SOLD_TO__0REGION] ON ROWS FROM [Z_0SD_C03/ZSD_BO_QUERY_JR_COBERTURA]
    How do I run these in MDXTEST?
