Exadata Design

We have a requirement to migrate 100+ applications (400+ Oracle databases) to Exadata X2-2. The project timeline is 12 months to move everything to the Exadata environment.
Current state of databases:
===================
1. All databases are on 11.2.0.2/3
2. Mostly standalone (very little RAC)
3. Mostly OLTP applications
4. No consolidation
5. Production systems have RMAN level 0, incremental, and archivelog backups
6. Databases run on Linux and SunOS
Questions:
========
1. How many Exadata Full Racks would we need to buy to accommodate the workload of all these databases?
2. What is the best way to use these Full Racks? Meaning, should we break a Full Rack down into Quarter Racks or Half Racks? What methodology should we follow to design this?
3. Is there any impact on the availability of usable storage if a Full Rack is broken down into Half or Quarter Racks? What percentage of storage is wasted, and why?
4. Given the cost, is it advisable to use a non-Exadata environment for development/test databases?
Thanks,

1. How many Exadata Full Racks would we need to buy to accommodate the workload of all these databases?
Without information on transaction activity, database sizes, and many other factors, there is no way to give a solid answer. Having Oracle or another firm experienced with Exadata migrations evaluate the environment would be the best way to get an answer to this question.
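Purely to illustrate the arithmetic involved (every per-database and per-rack figure below is a hypothetical round number, not a real X2-2 specification; real inputs must come from AWR/Statspack data gathered across the 400+ databases), a back-of-the-envelope estimate might look like:

```python
# Hypothetical back-of-the-envelope rack-count estimate.
# All inputs are illustrative placeholders, not measured values.
import math

databases = 400
avg_peak_cores_per_db = 0.5    # hypothetical average peak CPU demand per DB
avg_size_tb_per_db = 0.3       # hypothetical average database size in TB

cores_per_full_rack = 96       # assumed: 8 db servers x 12 cores each
usable_tb_per_full_rack = 100  # assumed usable space after ASM mirroring

racks_for_cpu = (databases * avg_peak_cores_per_db) / cores_per_full_rack
racks_for_space = (databases * avg_size_tb_per_db) / usable_tb_per_full_rack

# Size for the worse of the two dimensions, plus headroom for growth.
headroom = 1.3
racks = math.ceil(max(racks_for_cpu, racks_for_space) * headroom)
print(f"estimated full racks: {racks}")
```

In practice the CPU figure should be measured peak concurrent demand, not a sum of idle averages, and memory, IOPS, and network bandwidth would be sized the same way; whichever dimension needs the most racks wins.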
2. What is the best way to use these Full Racks? Meaning, should we break a Full Rack down into Quarter Racks or Half Racks? What methodology should we follow to design this?
The best way to use Exadata is to utilize the features of the environment, i.e. resource management, instance caging, and instance placement. By evaluating the databases and instances, placing them properly, and then applying resource management and instance caging, you get a good level of control over resource utilization. Breaking a Full Rack down into two Half Racks, or into one Half Rack and two Quarter Racks, is possible and has been done, but it is typically done only where there is a firm requirement to divide the workload, i.e. where SLAs promise that no resources will be shared between the environments and hardware must be dedicated to them.
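To illustrate the instance-caging side of this (the instance names and core counts below are made up): caging amounts to setting cpu_count on each instance, with a Resource Manager plan enabled, and the design question is whether the cages on a node sum to at most the physical cores ("partitioned") or to more ("over-subscribed"). A small planning check might look like:

```python
# Sketch of an instance-caging plan check for one database node.
# Instance names and all numbers are hypothetical.
node_cores = 24  # assumed cores on one database server

cages = {        # instance name -> cpu_count cage
    "erpdb1": 8,
    "crmdb1": 6,
    "hrdb1":  4,
    "rptdb1": 4,
}

total = sum(cages.values())
if total <= node_cores:
    # Partitioned approach: no CPU contention between instances.
    print(f"partitioned caging: {total}/{node_cores} cores committed")
else:
    # Over-subscribed approach: allowed, but instances can contend
    # for CPU if they peak at the same time.
    print(f"over-subscribed by {total - node_cores} cores")
```

On each instance, the corresponding database-side settings would be along the lines of ALTER SYSTEM SET cpu_count=8 plus enabling a plan via the resource_manager_plan parameter.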
3. Is there any impact on the availability of usable storage if a Full Rack is broken down into Half or Quarter Racks? What percentage of storage is wasted, and why?
Yes. If you break the rack apart physically, you will lose some usable space because of the mirror-related free space required in each configuration; how much you lose depends on how many physical configurations you break it down into. An Exadata configuration is designed to tolerate the loss of an entire storage cell without impacting availability, so each configuration must keep enough free space to re-mirror a failed cell's data. The more you break down the rack, the larger that reservation becomes as a fraction of each configuration's total capacity, and usable space is reduced by that amount.
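As a simplified sketch of that effect (the per-cell capacity is a hypothetical round number, this assumes ASM normal redundancy, and it ignores other overheads such as DBFS space): each standalone configuration keeps roughly one cell's worth of raw capacity free so a failed cell can be re-mirrored, and that reservation is a much bigger fraction of a Quarter Rack (3 cells) than of a Full Rack (14 cells):

```python
# Simplified usable-space model for standalone Exadata configurations.
# Assumptions: identical cells, ASM normal redundancy (2-way mirror),
# one cell's worth of raw capacity kept free per configuration so a
# failed cell can be re-mirrored. Per-cell capacity is hypothetical.

def usable_tb(cells, raw_tb_per_cell=21.0):
    """Usable TB for one standalone configuration of `cells` cells."""
    reserved = raw_tb_per_cell  # free space to absorb one cell failure
    return (cells * raw_tb_per_cell - reserved) / 2.0  # halve for mirroring

full = usable_tb(14)                      # full rack: 14 cells
halves = 2 * usable_tb(7)                 # two half racks: 7 cells each
mixed = usable_tb(7) + 2 * usable_tb(3)   # one half + two quarter racks

print(f"full rack         : {full:6.1f} TB usable")
print(f"2 half racks      : {halves:6.1f} TB "
      f"({100 * (full - halves) / full:.1f}% lost vs. full)")
print(f"half + 2 quarters : {mixed:6.1f} TB "
      f"({100 * (full - mixed) / full:.1f}% lost vs. full)")
```

With these toy numbers, splitting into two half racks loses roughly 8% of the full rack's usable space, and the half-plus-two-quarters split loses roughly 23%, purely because the one-cell reservation is paid once per standalone configuration.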
4. Given the cost, is it advisable to use a non-Exadata environment for development/test databases?
Many organizations use non-Exadata hardware for development and QA environments, but keep a non-production Exadata for performance and release testing whenever possible. It is fine to do development and feature/QA testing on non-Exadata, but to ensure that your releases are fully tested for Exadata, it is recommended to have an Exadata environment for pre-production testing. I have seen organizations use this environment in rotation: where there are multiple racks in production, they keep a matching rack for non-production and rotate each application's testing cycle through it, which helps reduce costs.

Similar Messages

  • Exadata performance

    In our exachk results, there is one finding for shared_servers.
    Our current production environment has shared_servers set to 1 (shared_servers=1).
    This is what exachk reports:
    Benefit / Impact:
    As an Oracle kernel design decision, shared servers are intended to perform quick transactions and therefore do not issue serial (non PQ) direct reads. Consequently, shared servers do not perform serial (non PQ) Exadata smart scans.
    The impact of verifying that shared servers are not doing serial full table scans is minimal. Modifying the shared server environment to avoid shared server serial full table scans varies by configuration and application behavior, so the impact cannot be estimated here.
    Risk:
    Shared servers doing serial full table scans in an Exadata environment lead to a performance impact due to the loss of Exadata smart scans.
    Action / Repair:
    To verify shared servers are not in use, execute the following SQL query as the "oracle" userid:
    SQL>  select NAME,value from v$parameter where name='shared_servers';
    The expected output is:
    NAME            VALUE
    shared_servers  0
    If the output is not "0", use the following command as the "oracle" userid with properly defined environment variables and check the output for "SHARED" configurations:
    $ORACLE_HOME/bin/lsnrctl service
    If shared servers are confirmed to be present, check for serial full table scans performed by them. If shared servers performing serial full table scans are found, the shared server environment and application behavior should be modified to favor the normal Oracle foreground processes so that serial direct reads and Exadata smart scans can be used.
    lsnrctl service on our current production environment shows all handlers as 'LOCAL SERVER'.
    How should I proceed here?
    Thanks again in advance.

    Thank you all for your help.
    Here is an output of lsnrctl service:
    $ORACLE_HOME/bin/lsnrctl service
    LSNRCTL for Linux: Version 11.2.0.4.0 - Production on 14-JUL-2014 14:15:24
    Copyright (c) 1991, 2013, Oracle.  All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=LISTENER)))
    Services Summary...
    Service "+ASM" has 1 instance(s).
      Instance "+ASM2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:1420 refused:0 state:ready
             LOCAL SERVER
    Service "PREME" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREMEXDB" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "D000" established:0 refused:0 current:0 max:1022 state:ready
             DISPATCHER <machine: prodremedy, pid: 16823>
             (ADDRESS=(PROTOCOL=tcp)(HOST=prodremedy)(PORT=61323))
    Service "PREME_ALL_USERS" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_TXT_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CORP_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_DISCO_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_EAST_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CRM" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_CRM_WR" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_RPT" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    Service "PREME_WEST_APP" has 1 instance(s).
      Instance "PREME2", status READY, has 1 handler(s) for this service...
        Handler(s):
          "DEDICATED" established:627130 refused:3 state:ready
             LOCAL SERVER
    The command completed successfully

  • Exadata and Oracle VM

    The other question is whether it is supported to install Oracle VM (configured according to the note http://www.oracle.com/technology/tech/virtualization/pdf/ovm-hardpart.pdf) on the nodes of an Exadata.
    Thank you very much for the help

    Thank you for your clarification Uwe.
    Indeed I mixed up the two items. Sadly, I had hoped that there was indeed an Oracle VM for Exadata, which would have been logical/practical for an SME organization moving to the Exastack platform.
    Just to check that I understand the current Exastack engineered systems offerings, I am outlining the salient points:
    1. Exadata - Is designed (marketed) to be exclusively an Oracle 11g + RAC machine, running under Oracle Linux. CPU licensing is satisfied by physically (in hardware) disabling CPUs supplied in the unit.
    2. Exalogic - Is designed (marketed) for the middleware / applications, using Oracle VM + Oracle Linux, optionally the Oracle SOA Suite middleware applications, plus third-party applications, and possibly other host operating systems supported by Oracle VM.
    If an organization has a web application, based on traditional web server front tier, application server tier (JAVA / JEE), and a data tier (Oracle DB), then what are the choices to move to the Exastack platform:
    1. Move the web / application server tiers to Exalogic and the data tier to Exadata. This seems to be the way the engineered platform is marketed; however, depending on the application design / usage, this may not justify that level of investment. E.g. if the application is application-tier intensive, then an Exalogic with a relatively small Oracle DB instance is required; alternatively, if there is a DB-intensive application, then an Exadata with a relatively small web / application server tier is required. Hence, would it not be logical to offer such a hybrid "Exastack" model for SMEs starting out, then scaling horizontally when volumes demand it:
    e.g. Get Exadata for the Oracle 11g + RAC, and reserve 2 or more processing units (X3-2) for the middle-ware (Oracle VM + Oracle Linux) in the same physical rack, when volumes increase, a dedicated Exalogic is added and the middle-ware applications migrated to the Exalogic platform, freeing up (scaling up) the Exadata platform.
    In addition, given the licensing restriction, can Exadata be partitioned to host production database instance(s) and development instance(s) on the same physical rack? Note: this might be somewhat worked around, since a business demanding HA may have a DR site with a second Exadata, which can host the development DB instances during normal operations.
    I feel this is more of a marketing rather than technical issue here, but if there is some flexibility in the Exastack configuration, would simplify and lower the affordability bar for SME / start-ups.
    Appreciate any views.
    Best regards,
    Jesmond

  • Hadoop Implementation in Oracle Database without Exadata/Oracle hardware

    All,
    Please does anyone know if Oracle plans to implement Hadoop without Oracle-supplied hardware (Exadata and the like)? The reason I'm asking is that at my company we are now approaching a transaction rate of 2000 t/s (transactions per second) and developers are beginning to complain about how slow Oracle is. They've gone ahead and moved parts of the database to Cassandra, Redis and DynamoDB (and they're currently experimenting with MongoDB). I understand that some of these DBs operate a key/value system, so it's fast to retrieve data from them. I'm just wondering if Oracle plans to stem this tide by implementing Hadoop without packaging it with a hardware system, so as to make it more affordable to implement, as there are so many open-source DBs springing up these days.
    Please any useful information would be highly appreciated. (And if there's anyone out there close enough to Oracle, please, let them know these threats are real and my Oracle database is actually vanishing under me).
    Thanks in advance.
    Baffy

    Adam Martin wrote:
    I don't have any more insight into Oracle's future with Hadoop other than what they have said in their statements of direction.
    However, it struck me that you consider 2000 tps (or so) to be some kind of threshold above which Oracle technology will have trouble keeping up.
    developers are beginning to complain about how slow Oracle is
    No. It's not Oracle that is slow. Actually they are beginning to complain about how slow that particular system is performing. The application may be slow, and the database may be the bottleneck. But this does not mean that moving away from Oracle is necessarily the right solution.
    Granted, there are excellent innovations in database technology right now, especially in the arena of massively parallel database systems. However, I would want to take a long look at the system design, from hardware to database design to application code before concluding that the dbms needs to change. There could be storage i/o sub system issues or application server issues or network problems too.
    I am sorry if I am just regurgitating things you probably already know, and likely have analyzed to death with your system. But Oracle can scale well beyond 2000 tps while still serving up good response times. And it is also important to note that moving to these other database technologies sometimes comes with the need to sacrifice some part of the ACID (atomicity, consistency, isolation, durability) properties of transaction management present in a typical RDBMS like Oracle.
    I have told this story in a few other threads on this forum over the years, but it bears repeating for the OP.
    Several years ago (more than I like to think about now) we were evaluating some commercial software for a specialized app in the industry in which I was working at the time. The sales team for one particular vendor kept talking about SQL Server. I reminded him we were an Oracle shop and asked if their product would run on Oracle. His response was something to the effect that "It will, but we recommend SQL Server because we've found that Oracle bogs down with more than five concurrent connections." The meeting didn't last much longer, and I made sure that vendor didn't make the short list. And the lesson is that even commercial software developers (or perhaps especially commercial software developers?) often don't have a clue ...
    I'd guess the OP's developers also don't have a clue. Perhaps they are writing their apps to Oracle using what they learned as 'best practice' in SQL Server. Or perhaps they are even more clueless than that and the need to design for scalability and performance never even enters their minds.

  • Will Oracle UK Support their Exadata running 10g instead of 11g ?

    I have ORDM installed on an Exadata at present using 11g.
    ORDM is certified for 11g but, as I understand it, it is really a 10g design.
    The Exadata is an "11g box" however it would be mighty convenient to be able to run 10g on it instead with ORDM and another system schema in the database instance.
    Does anyone know if Oracle will support this? (I believe they won't support a 10g and an 11g installation on the one Exadata.)
    Cheers,
    Matt.

    I am aware this isn't an official Oracle support site, hence saying "Does anyone know if Oracle will support this?"
    I was just hoping for a quick win from someone who might have done this before as I asked the same question to Oracle on Tuesday and they are yet to tell me.
    Never mind.
    Matt.

  • Exadata internal network connectivity query

    Dear Experts,
    We are planning to migrate databases from an old platform to Exadata. We are reviewing the network requirements and will estimate the number of network ports required to connect Exadata to our existing production environment.
    We would like to know, based on the X4-2 model, which has 4 x 1/10Gb Ethernet ports, 2 x 10Gb Ethernet ports (optical) and 2 x QDR (40Gb/s) ports:
    1. How are these ports used?
    2. Which ports inside Exadata carry the RAC heartbeat?
    3. How many network ports are required to connect to the external network?
    4. How do the RAC public network ports form a team for resilience?

    As John said - wrong forum. There is nothing in your question that relates to SQL or PL/SQL.
    However, I have been an Infiniband user for over 10 years and would love to comment on the technology.
    The QDR (Quad Data rate) ports are 40Gb/s Infiniband ports. Infiniband is a fabric layer and differs from Ethernet. However, you can run IP over Infiniband (called IPoIB). Infiniband reduces the complexity of the ISO stack and IP can run faster over IB (Infiniband). Oracle many years ago designed a new IB protocol called RDS (Reliable Datagram Sockets) - https://cw.infinibandta.org/document/dl/7227
    RDS can be used in Oracle RAC instead of UDP (requires a rdbms relink). It is 50% faster than UDP and with half the latency.
    Exadata uses IB as Interconnect and RDS as Interconnect protocol.
    IB also supports other protocols like RDMA (Remote Direct Memory Access), ISER and SRP (scsi storage protocols), and so on. We are using ISER for example for our self built storage layer (3 storage servers with 60 TB capacity) for Oracle RAC, I believe Exadata also runs its I/O fabric layer over Infiniband.
    When you look at the HPC environment, the http://top500.org stats are quite meaningful as these describes the 500 fastest super computer clusters on this planet. Here are the latest Interconnect stats:
    When we first bought into Infiniband (amid a lot of flack from corporate architects), IB had a mere 4% market share in the top500 environment. I feel that the above stats (Nov 2014) clearly show who was right.
    Infiniband is a great technology - despite what some vendors and so-called self proclaimed experts and architects say.

  • Can customers rebuild an Exadata machine with the latest stack versions?

    There’s a possibility that we’ll be purchasing two new Exadata machines (X3) in the near future. I'd be getting very excited if I wasn't already entirely swamped :)
    If it happens, we’ll be asking Oracle to install the latest and greatest of the software stack when they arrive on-site with our new toys. Currently, this means:
    <i>OEL: 5.7 (with latest kernel)
    ESS: 11.2.3.2.1 (write-back FC, mmmmm!)
    RDBMS/GI: 11.2.0.3.17</i>
    Our current Production database is on a V2 machine and has the following versions of the stack:
    <i>OEL 5.5
    ESS 11.2.2.3.2
    RDBMS/GI 11.2.0.2 BP7</i>
    We are hoping, once the dust settles, that we can re-purpose our existing V2 machine as a Development environment. However, in order for that to be of any use, we need the software stack to match what will be running in Production on the X3s.
    As far as I understand, the upgrade path is as follows (as per 888828.1)
    <i>Upgrade the O/S to OEL 5.7 and the latest kernel on storage cells and comp nodes
    Upgrade the firmware on the IB switch to 1.3.3-2 (which we already have)
    Upgrade the Exadata Storage Server on the storage cells and comp nodes to 11.2.3.2.1
    Install the 11.2.0.3.17 GI and RDBMS binaries
    Upgrade ASM from 11.2.0.2 to 11.2.0.3.17
    Install the 11.2.0.3.17 RDBMS binaries
    Make/move/restore/copy Development onto the newly-upgraded V2 machine.</i>
    I’m wondering whether it’s better for us to upgrade the V2 machine from our current versions of the stack to the latest or whether it’s better to attempt a rebuild?
    As a customer, are we able to rebuild the stack ourselves with the new software or do we have to have Oracle come in and go through their installation process (we are putting a different version of the stack on than we presently have)?
    Mark

    frits hoogland wrote:
    I don't understand the answers.
    A V2 Exadata system (and up: X2, X3) is fully supported up to the newest Exadata software releases, so you can just upgrade. Of course you need to check MOS 888828.1 for which path to take (not all software might be upgradable to the latest release in one go). No need to puzzle, just upgrade.
    I'm fairly sure that we would be able to upgrade - in fact, when we weren't entertaining a hardware upgrade earlier in the year, I had planned out an upgrade path from our current versions to what was the current stack before the FlashCache became write-able.
    We didn't have much of a choice at this point because the V2 was planned to be our Production environment for the foreseeable. Our upgrade was possible, but would have been relatively cumbersome as we would have had to upgrade the O/S, the ESS on cells/nodes, the GI and then the RDBMS in chunks to satisfy the various pre-requisites.
    My question was whether it was possible/better/cleaner to simply rebuild the whole box with the latest software stack instead of upgrading now that the V2 environment is likely to be designated for Development if we get the new hardware and there isn't the associated pressure of it being a Production box.
    >
    If you want to change the space ratio between DATA and RECO, the easy path is to delete all the databases, remove the DATA and RECO diskgroups, remove the grid disks on the cells, create them again, and create the diskgroups on top of them. This can also be done online by dropping the grid disks per cell/storage server in ASM, recreating them with different sizes, and bringing them into ASM again.
    I believe that Tycho said he had to choose between upgrading the stack AND changing the space ratio between his diskgroups, OR just rebuilding the system from scratch: and he chose to rebuild.

  • Can not see two fields in Crystal 2008 Developer explorer/designer view

    I am currently developing Crystal 2008 reports against the salesforce.com database using version 12.0.0.683 CR Developer Full version. I am using an updated driver that was provided in July or August 08 in order to view self-referencing fields. The problem is that when I try to report against one of the tables (Lead History) I cannot view two of the fields (New Value and Old Value). I can see these two fields (New Value and Old Value) in the Database Expert as the last two fields in the actual table, but the two fields are missing when I go into the explorer/designer view. In Salesforce, these two fields cannot be filtered on, but I can export all the values in this table using the Salesforce Apex data loader.

    Please re-post to the OnDemand Forum if this is still an issue, or purchase a support case and have a dedicated support engineer work with you directly.

  • MSI Forum Wallpaper Design Contest

    Prize:
     The winner will be rewarded with a soon to be launched Z87-GD65 GAMING motherboard.
    (check it out at http://game.msi.com)
    How:
     Design and submit a wallpaper in this Forum topic.
     (If you don't have an MSI Forum account you need to register first here, please read how to add pictures to forum posts below)
    Contest Rules:
    -   Submitted wallpapers must be at least 1920x1080.
        (Preferably JPG or PNG formats, 16:9 or 16:10 ratio, max 2560x1600)
    -   You can post as many unique wallpapers as you like until the contest ends on June 28th at 18:00h (GMT).
        (when we will lock this topic).
    -   MSI will select and announce the winner, one week after the contest has ended.
    -   The winner will be contacted by PM and registered email.
        (if we can’t reach the winner within one week, we will select a new winner)
        (so please check your PMs/Email regularly when the contest has ended)
    -   MSI Forum Rules and MSI Forum Terms also apply to the Wallpaper entries, so keep it clean.
        (no obscenity/vulgarity/copyrighted material).
        MSI reserves the right to remove entries when we think the entry is inappropriate.
    -   By submitting your wallpaper entry to this forum you give MSI the right to use/upload/print
        any wallpaper entries for marketing activities during or after this contest.
    Hints:
    -   Upload your wallpaper(s) to an image hoster  (e.g. photobucket, imageshack, etc) or other online location with unrestricted access.
    -   Please use the following tags to add your uploaded wallpaper to your forum post.
       Code: [Select]
    [img width=760]http://url-to-your-wallpaper-goes-here.jpg[/img]    (“width=760” added for proper forum scaling, does not affect original image). More info here.
        (Please Note if you also want to use the URL tags you need at least 2 posts in the forum (=anti spam measure))
    -   Use high quality pictures. As a starter you can use any pictures/logos from http://media.msi.com or
        other MSI websites (http://www.msi.com, http://game.msi.com, http://oc.msi.com)
    -   Increase your chance of winning by:
          1.) Using the MSI Dragon image
          2.) Using the MSI Gaming theme or the MSI OC theme
          3.) Get a lot of users to share your Forum post on Facebook
               (everyone can share your Forum post on Facebook by clicking the button in your forum post).
    Good luck everyone.

    Hello everyone,
    Here is my submission for the contest with 7 wallpapers (I know it's a lot... but I was inspired so... ^^ )
    For the 2 first ones, I wanted some classic good looking wallpapers with the MSI Dragon being the center of it, with some high quality effects. So here they are, a bright version and a dark one.
    Hope you guys like it.
    For the 5 next ones, I tried something really different, more in the style of an advertising campaign: some minimalist-looking wallpapers, trying to focus on simple things, with no shadow effects. Art in its purest form, simple but well thought out.
    I asked myself why people would choose MSI instead of another brand, and I figured some people already know that. So I first started with "Gamers know why", because it's obvious, then went to an iconic e-sports legend ("Legends Know Why"), Mr. Patrik 'cArn' Sättermon; I think he and the Fnatic team clearly represent why people should choose MSI. So I used the Fnatic team for the third one, "Winners know why"; they are clearly one of the best multi-gaming teams in the world and have been sponsored by MSI for quite a few years now. (The picture is from the IEM 5.) For the 4th one, I noticed that I am starting to see more and more MSI laptops around me, at college etc., so I thought that "The World knows why". And for the last one I tried to represent the spirit around the MSI Dragon and the whole gaming part of MSI, that feeling of POWER. So I tried to make that last one a combination of the 2 styles, representing the power coming out of the Dragon... in a minimalist way. So here's "Unleash the Dragon". I hope I succeeded, and I really hope you like my work. It took me several hours to complete that "campaign" and I'm really proud of it. I used the "#" system in it because, as I made it as an advertising campaign, I thought it would be nice if everyone could share that spirit easily.
    The picture of cArn and the one of the Fnatic team are from the internet, the rest comes from my personal pictures (textures and such) and of course everything is homemade.
    Cheers.

  • IF and ABS condition statement in BEX query designer

    Hi,
    I would like to ask the best way to reproduce an acceptable result from an Excel IF and ABS condition statement.
    The condition statement that I have on my Excel file is
    =IF((A2-B2)>0,ABS(A2-B2),0)
    I've tried multiple times to reproduce this in BEX Query Designer; unfortunately I'm getting a bad result or an invalid formula.
    Anyone who could help me with my issue?
    Thanks,
    Arnold
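One observation that may help: whenever A2-B2 > 0, ABS(A2-B2) is just A2-B2, so the whole Excel formula reduces to max(A2-B2, 0). As far as I know, comparisons in a BEX formula evaluate to 1 or 0, so this can be entered in the formula editor as the difference multiplied by a boolean, e.g. (A - B) * (A > B). A quick sanity check of the equivalence with arbitrary values:

```python
# Verify that Excel's =IF((A2-B2)>0, ABS(A2-B2), 0) is equivalent to
# max(A2-B2, 0) and to the boolean-multiplication form usable in BEX.

def excel_form(a, b):
    return abs(a - b) if (a - b) > 0 else 0

def bex_form(a, b):
    # In a BEX formula, (A > B) evaluates to 1 or 0, so the whole
    # expression can be entered directly as (A - B) * (A > B).
    return (a - b) * (1 if a > b else 0)

for a, b in [(10, 3), (3, 10), (5, 5), (-2, -7)]:
    assert excel_form(a, b) == bex_form(a, b) == max(a - b, 0)
print("all cases match")
```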

    Hi Arnold,
    Thank you,
    Nanda

  • Text value is not getting displayed in Query designer !!

    Dear experts..,
    I have created a new query in Query Designer using my InfoProvider, selected one field in the default values, and am trying to restrict that particular field. While selecting the restriction in Query Designer I get the exact text value, but after generating the report the key value is displayed instead of the text value. So how can I get the text instead of the key value?
    Please help me, friends.
    I have posted an OSS message as well. I got a reply, but I didn't understand it either; what is he trying to say?
    Can I get a text display or not?
    Can anyone help me in this regard?
    SAP Reply----
    Hello kumar,
    After another analysis I have to inform you about the general concept of "compounded characteristics".
    A compounded characteristic binds two characteristics. The technical name is generated from the technical names of the two characteristics, combined by two underscores "__".
    An individual text is only available for one single combination of both characteristics.
    Example:
    =======
    Compounded characteristic "Famous family name" is a combination of
    characteristic "COUNTRY" & "ETHNIC". Technical name: COUNTRY__ETHNIC
    Values for Country: USA, Australia
    Values for Ethnic: Asian, Latino
    Possible value combinations with individual text:
    USA & Asian; text: "Ling"
    USA & Latino; text: "Sanchez"
    Australia & Asian; text: "Chu"
    Australia & Latino; text: "Garcia"
    (Keep in mind the individual text only valid for the specific
    combination.)
    In analogy to the issue that you reported, you want to restrict this compounded characteristic. In the window where you select the restricted value (called the Selector) you'll see on the left-hand side all available combinations of the characteristics with an individual text.
    You select family name "Chu" and drag and drop it to the right side.
    Actually you can only restrict the right-hand part of the compounded characteristic. In our example you would restrict on characteristic "ETHNIC" with value "Asian" (when you switch on technical names this becomes clearer). The text "Chu" is displayed in the context of the Selector because you selected the value combination Australia & Asian. But in the end it is just a placeholder(!) for any combination of characteristic "ETHNIC" and value "Asian"; in our example it could be USA & Asian ("Ling") or Australia & Asian ("Chu").
    On leaving the Selector, the individual text is gone because the context between the two characteristics is lost. You just have a restriction on characteristic "ETHNIC" with value "Asian". An individual text can't be displayed because the compounded characteristic is not specified for this restriction.
    You're right, it is confusing when "losing" the text of a restriction. But according to the concept of compounded characteristics it is correct behavior.

    Hi Anandkumar,
    I believe this issue can be resolved by changing the query properties for the particular field.
    Kindly check the field properties in Query Designer and ensure that Text is enabled rather than Key.
    Field property check: go to Query Designer -> click on the field -> on the right-hand side, in Properties, click on the Display tab -> select Text in the "Display as" drop-down menu.
    Further check: verify the master data availability for the particular InfoObject; if master data is not available, load the text data so that texts are available at report level.
    Hope this helps you!!
    Best Regards,
    Maruthi

  • Master data is not getting displayed in the Query Designer

    Hi,
    I have a DSO in which I have an InfoObject called Emp No. in the Data Field.
    The Emp.No is being maintained as master with (Emp Name, Address, Telephone No, DOB) as attributes.
    I have loaded the data in the Emp. No. master. Then tried loading the transaction data in DSO.
    The Emp. No. is there in the DSO active data, but in the Query Designer it's not getting displayed.
    Hope its clear.
    Please help.
    Thanks

    Hi,
    I have brought the Emp. No. into the Key Field and have also activated the master data again.
    Yet my Query Designer doesn't have the Emp. No.
    I have done a full load for both Master and Transaction.
    Please advise me on other alternatives.
    Thanks

  • Multiprovider design -problem

    I am just creating a MultiProvider on a custom billing cube and a custom DSO.
    In my billing cube, I have the billing document assigned to a line item dimension,
    and the line item is also in another line item dimension.
    My DSO has billing document number, item number, and partner function as its primary key.
    Could you please let me know how my MultiProvider design should look?
    How do I create a dimension in a MultiProvider?
    Can I assign the billing document from the cube and the billing document from the DSO to the same dimension?
    How do I handle the partner function characteristic, which is part of the DSO's primary key but is not in the billing cube?
    There are 2 other data fields available in the DSO that are not in the billing cube; do I have to create a new dimension for them?
    Please help.
    Regards

    Can I assign the billing document from the cube and the billing document from the DSO to the same dimension?
    You can create a single dimension for this; in it you can use the billing document and right-click to assign/identify it from both the DSO and the cube.
    How do I handle the partner function characteristic, which is part of the DSO's primary key but is not in the billing cube?
    Create a separate dimension for the partner function characteristics.
    There are 2 other data fields available in the DSO that are not in the billing cube; do I have to create a new dimension for them?
    If those data fields are related to the billing document, then you can add these two fields to the first dimension (the billing document dimension).
    Hope you get some idea.
    Veerendra.

  • Error while trying to Open and Delete a BEx Query in Query Designer.

    Hi Experts,
    I have been facing this issue with BEx Query in Designer mode for a couple of days now in Development Environment in BI 7.0.
    I would like to delete an unwanted BEx Query because I need to give its name to another BEx Query.
    Whenever I try to Open this Query in BEx Query Designer to Delete, it gives out an error saying:
    "An error occurred while communicating with the BI server.
    As a result of this error, the system has been
    disconnected from the BI server.
    Detailed Description:
    STOP: Program error in class SAPMSSY1 method : UNCAUGHT_EXCEPTION
    STOP: System error in program CL_RSR and form GET_CHANMID-01 - (see long text)"
    Details text is as below:
    Diagnosis
    This internal error is an intended termination resulting from a program state that is not permitted.
    Procedure
    Analyze the situation and inform SAP.
    If the termination occurred when you executed a query or Web template, or during interaction in the planning modeler, and if you can reproduce this termination, record a trace (transaction RSTT).
    For more information about recording a trace, see the documentation for the trace tool environment as well as SAP Note 899572.
    I have tried with SAP Note 899572, but still it's the same.
    Has anyone faced similar issue? Could you please let me know if you have better ideas or solution?
    Your time is much appreciated.
    Thanks,
    Chandu

    Arun - Thanks a million. You saved me Nine. It worked.
    Could you also please tell me how to rename a Query (its technical name), or is "Save the Query As" still the only way...?
    Points already assigned.
    Thanks,
    Chandu

  • Dunning Letter Design & Aging report

    Hello Experts,
    I have a few doubts in B1 Sales AR -> Dunning and Aging Reports.
    1. How do we incorporate our own letter format for dunning letters? I looked for PLD's, but nothing was available in the print preview screen.
    2. When we execute Customer Receivables Aging report, what is the significance of the column "Future Remit"? In our B1 instance, the receivables that are due 60 or 90 days from current date are not appearing in their respective columns.
    Please advise on how to proceed
    With Regards,
    PS: We are running B1 2005-B / MS-SQL 2005

    Hi Kaatss,
    You should be able to find the dunning letter formats by clicking on the layout designer button and then, in the window, selecting the dunning letter formats from 'Choose Document Type'.  They go by the IDs DUN0... DUN09.
    As for the columns: the 60 and 90 columns show the invoices (values) that are overdue by 60 and 90 days respectively.
    The Future Remit column shows invoices that are still within their due date.  So if you raise two invoices with 60 and 90 day payment terms, but they are still within their due dates, their values will appear in the Future Remit column.
    I hope this helps.  I use 2005A, so I hope there are no differences.
    Damian
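    The bucketing Damian describes can be sketched generically. This is an illustration only (plain Python, not B1 logic; the bucket boundaries are an assumption for the example): an invoice whose due date is still in the future lands in "Future Remit", while overdue invoices fall into aging columns by days past due.

```python
from datetime import date

# Illustration only: classify an open invoice into an aging column
# relative to the report run date. Boundaries are assumed for the example.
def aging_bucket(due_date: date, run_date: date) -> str:
    overdue = (run_date - due_date).days
    if overdue < 0:
        return "Future Remit"   # still within its due date
    if overdue < 30:
        return "0-30"
    if overdue < 60:
        return "30-60"
    if overdue < 90:
        return "60-90"
    return "90+"

run = date(2024, 1, 31)
# An invoice with long payment terms, not yet due:
print(aging_bucket(date(2024, 3, 1), run))    # Future Remit
# An invoice 77 days past due:
print(aging_bucket(date(2023, 11, 15), run))  # 60-90
```

    This also shows why receivables due 60 or 90 days from the current date do not appear in the 60/90 columns: those columns count days overdue, not days until due.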
