Best practices for refreshing .unv structure for SAP-based universes

This may sound basic, and I apologize for asking, but the Refresh Structure behavior in SAP-based universes is somewhat different from normal universes.
Here is what I'd like to do:
- Hide all the L00 objects.
- Rename all the L01 objects and move them to a new Class.
- Change some of the detail (attribute) objects to dimension objects.
- Hide the format and Unit for key figures.
- Hide all of the classes/subclasses that get automatically generated when an SAP-based universe is created.
I have noticed that when I do the above and then refresh the universe, it assumes that all these objects have gone missing from their original classes and adds them back to the universe.
I also want to make sure that if the SELECT of an object gets updated after the object has been renamed, the change is still picked up automatically.
Lastly, I have some reports that were built prior to this renaming. I want to make sure that those reports do not break.
Thanks,
Kashif

Hi,
This thread is really old.
Yes, this was a common problem back in the earlier XI 3.x days. A lot of bugs in this area were eliminated by the time of XI 3.1 SP03 FP3.x (you don't quote your version).
Also, you need to be aware that a structure refresh is often not needed, and it can corrupt the OLAP universe. Please check out Note 1278216 - What are the best practices for OLAP Universe Change Management when using SAP Integration Kit?
In essence:
Only use the 'Refresh Structure' functionality if:
- A new object (dimension/characteristic) has been added to the BEx query (Rows/Columns/Free Characteristics)
- A new variable restriction has been added to the BEx query filters
Do not use the 'Refresh Structure' functionality after:
- Having modified a STRUCTURE in BEx, i.e. 'Detail view of Formula' or 'Details of Selection', or changing the general description of structure members.
- Doing manual actions on objects/classes in the OLAP universe, like: move; cut/paste; drag/drop; hide; delete.
(because these workflows can lead to corruption)
regards,
H

Similar Messages

  • What are the best practices to migrate VPN users for inter-forest migration?


    It depends on various factors. There is no "generic" solution or best practice recommendation. Which migration tool are you planning to use?
    Quest (QMM) has a VPN migration solution/tool.
    ADMT - you can develop your own service-based solution if required. I believe it was mentioned in my blog post.
    Santhosh Sivarajan | Houston, TX | www.sivarajan.com
    ITIL,MCITP,MCTS,MCSE (W2K3/W2K/NT4),MCSA(W2K3/W2K/MSG),Network+,CCNA
    Windows Server 2012 Book - Migrating from 2008 to Windows Server 2012
    This posting is provided AS IS with no warranties, and confers no rights.

  • Best practice standard User Access Test for WIN2012 AD

    What is the best practice standard User Access Test for WIN2012 AD?

    Hello,
    as before, add a computer to the domain and log on with a domain user account to the computer.
    You should be able from the client machine to open the shared folders on the DCs, either with:
    \\DCName\sysvol
    \\DCName\netlogon
    or
    \\NetBiosDomainName\sysvol
    \\NetBiosDomainName\netlogon
    Best regards
    Meinolf Weber
    MVP, MCP, MCTS
    Microsoft MVP - Directory Services
    My Blog: http://blogs.msmvps.com/MWeber
    Disclaimer: This posting is provided AS IS with no warranties or guarantees and confers no rights.

  • Quick question regarding best practice and dedicating NICs for traffic separation.

    Hi all,
    I have a quick question regarding best practice and dedicating NICs for traffic separation for FT, NFS, iSCSI, VM traffic, etc. I get that it's best practice to try and separate traffic where you can, especially for things like FT, however I just wondered if there was a preferred method for achieving this. What I mean is ...
    -     Is it OK to have everything on one switch but set each respective portgroup to having a primary and failover NIC, i.e. FT, iSCSI and all the others failover? (This would sort of give you a backup in situations where you have limited physical NICs.)
    -    Or should I always aim to separate things entirely with their own respective NICs and their own respective switches?
    During the VCAP exam, for example (not knowing in advance how many physical NICs will be available to me), how would I know which stuff I should segregate on its own separate switch? Is there some sort of ranking order of priority/importance? FT, for example, I would rather not stick on its own dedicated switch if I could only afford to give it a single NIC, since this to me seems like a failover risk.

    I know the answer to this probably depends on however many physical NICs you have at your disposal, however I wondered if there are any golden 100% rules - for example, FT must absolutely be on its own switch with its own NICs, even at the expense of reduced resiliency should the absolute worst happen? Obviously I know it's also best practice to separate NICs by vendor and hosts by chassis and switch etc.

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure was running on a Windows Server OS, providing the database access via a named ODBC connection (e.g. "APP_DATA").
    This made it easy to manage, as all the report developers had a standard System DSN called "APP_DATA", which was the same as the System DSN name on all of our DEV, TEST/UAT, and PROD servers for Business Objects.
    When we wanted to move/promote a *.rpt file from DEV to PROD, we did not have to change any "Database Connection" info, as it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
    Now, that hardware is moving from Windows OS to Red Hat Linux and we are trying to determine the Best Practices (and Pros/Cons) of using one of the three methods below to access the Oracle database for our *.rpts....
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) Oracle Native connection should be the most efficient method of passing SQL-query to the DB with the fewest issues and best speed [PRO]
    1b.) Oracle Native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle Native would require special-handling on the *.rpts at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    2a.) A 3rd-Party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow us to use a similar Developer / Admin overhead to what we are used to. [PRO]
    2b.) Adding a 3rd-party vendor into the mix may lead to support issues if we have problems with results or speeds of our queries. [CON]
    3a.) JDBC appears to be the "defacto standard" when running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with results or speeds of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit-IP of the Oracle server to be defined for each connection. This would require special-handling on the *.rpts at either the source-file (and NOT the CMC level) to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between quality/performance/speed of reports and easy-overhead for the Admins and Developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some information.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with the Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server for a specific name that will be the same across all environments.
    The database name will be resolved differently depending on the environment and will therefore access a different database.
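    For illustration, a minimal sketch of what such an entry could look like (the alias APP_DATA and the host/service names are made up; only HOST and SERVICE_NAME would differ between the DEV, TEST and PROD servers):

        # tnsnames.ora on the DEV server - the alias stays the same in every environment
        APP_DATA =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCP)(HOST = dev-oracle01.example.com)(PORT = 1521))
            (CONNECT_DATA = (SERVICE_NAME = appdata_dev))
          )

    Because each environment resolves the alias APP_DATA to its own database, reports promoted from DEV to PROD need no connection changes, just like with the old shared DSN name.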
    A second option is the possibility to change the connection in .rpt files in an automated way, with the Schedule Manager. This tool is an additional web application to deploy that can change the connection settings of thousands of rpt reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script; for this purpose, a few lines of code can change all the reports in a row.
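    As a rough illustration of that last option, here is a hypothetical sketch using the BusinessObjects Enterprise Java SDK. The credentials and CMS name are placeholders, and the actual per-report connection swap (done through the RAS SDK) is only indicated by a comment:

        import com.crystaldecisions.sdk.framework.CrystalEnterprise;
        import com.crystaldecisions.sdk.framework.IEnterpriseSession;
        import com.crystaldecisions.sdk.occa.infostore.IInfoObject;
        import com.crystaldecisions.sdk.occa.infostore.IInfoObjects;
        import com.crystaldecisions.sdk.occa.infostore.IInfoStore;

        public class ListReports {
            public static void main(String[] args) throws Exception {
                // Log on to the CMS (placeholder credentials and server name).
                IEnterpriseSession session = CrystalEnterprise.getSessionMgr()
                        .logon("Administrator", "password", "cmsserver:6400", "secEnterprise");
                IInfoStore infoStore = (IInfoStore) session.getService("InfoStore");
                // Find every Crystal Report in the repository.
                IInfoObjects reports = infoStore.query(
                        "SELECT SI_ID, SI_NAME FROM CI_INFOOBJECTS WHERE SI_KIND='CrystalReport'");
                for (Object o : reports) {
                    IInfoObject report = (IInfoObject) o;
                    System.out.println(report.getID() + " - " + report.getTitle());
                    // ...open the report via the RAS SDK here and update its
                    // database logon/server before saving it back.
                }
                session.logoff();
            }
        }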
    After some implementations on Linux against an Oracle database, I would also prefer the native connection. ODBC and JDBC are deprecated ways to connect to the database. You can use DataDirect connectors, which are quite good, but for large volumes you will see the difference.

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare side-by-side actual measures with their corresponding objectives/targets. Sounds simple. But, our objectives are static (not able to be aggregated) with multi-dimensionality and multi-levels. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions that are used in the criteria and regardless of the level.
    Here are some more details:
    Example of existing objective table:

        Dimension1 | Dimension2 | Dimension3 | Obj1 | Obj2 | Quarter
        NULL       | NULL       | NULL       | .99  | 1.8  | 1Q13
        DIM1VAL1   | NULL       | NULL       | .99  | 2.4  | 1Q13
        DIM1VAL1   | DIM2VAL1   | NULL       | .98  | 2.41 | 1Q13
        DIM1VAL1   | DIM2VAL1   | DIM3VAL1   | .97  | 2.3  | 1Q13
        DIM1VAL1   | NULL       | DIM3VAL1   | .96  | 1.9  | 1Q13
        NULL       | DIM2VAL1   | NULL       | .97  | 2.2  | 1Q13
        NULL       | DIM2VAL1   | DIM3VAL1   | .95  | 2.0  | 1Q13
        NULL       | NULL       | DIM3VAL1   | .94  | 3.1  | 1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure, they could get back a different objective: add Dimension1 and get 99%; add Dimension1 and Dimension2 and get 98%; add all three dimensions and get 97%; add zero dimensions (highest grain) and get 99%. Using our existing structure, adding a new dimension to the mix would make the possible combinations grow dramatically. (Not flexible.) A code sketch of this lookup rule follows after this list.
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
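    To make that lookup rule concrete, here is a small, hypothetical Java sketch (all names invented; NULL dimension values are modeled as absent map keys): a stored objective row applies when its non-NULL dimension values exactly match the dimensions used in the query criteria.

        import java.util.List;
        import java.util.Map;

        public class ObjectiveLookup {
            // A stored objective row: the dimension values it was defined for, plus Obj1.
            record Row(Map<String, String> dims, double obj1) {}

            // A row applies when its non-NULL dimensions exactly match the query's dimensions.
            static Double findObjective(List<Row> rows, Map<String, String> query) {
                for (Row r : rows) {
                    if (r.dims().equals(query)) {
                        return r.obj1();
                    }
                }
                return null; // no objective defined at this grain
            }

            public static void main(String[] args) {
                List<Row> rows = List.of(
                        new Row(Map.of(), 0.99),                         // zero dimensions
                        new Row(Map.of("Dimension1", "DIM1VAL1"), 0.99),
                        new Row(Map.of("Dimension1", "DIM1VAL1",
                                       "Dimension2", "DIM2VAL1"), 0.98));
                // Author adds only Dimension1 to the criteria -> prints 0.99
                System.out.println(findObjective(rows, Map.of("Dimension1", "DIM1VAL1")));
            }
        }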
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.

    Hi, yes, this suggestion is nice. First configure the sensors (activity or variable), then configure the sensor action as a JMS topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering the DB - I mean, any chance of changing config files so that the data does not go to that Reports schema but instead to a custom schema created by a user? I don't know if it can be done. My problem is that when I configure the JMS topic for sensor actions, I see blank data coming through; for some reason or other the data is not getting posted. I have used an ESB, with a routing service based on the schema which I am monitoring. Can anyone help?

  • Best practice in Infoprovider & Query design for access by BO Universe

    Hello Experts,
    Are there any best practices, identified by practitioners or suggested by SAP, for the development of InfoProviders and queries for access by a BO Universe?
    Best practices should be from the perspective of performance, design simplicity, adaptability to change, etc.
    Appreciate your help.
    Regards,
    Pritesh.
    Edited by: pritesh prakash on Jul 19, 2010 10:51 AM

    Thanks Suresh.
    My project plan is to build InfoCubes & queries which will then be used to build a Universe on top. Thus I am looking for do's & don'ts while designing InfoCubes & queries, such that there won't be any issues (performance or otherwise) when they are accessed by the Universe built on them.
    Hope I have made it more clear now.
    Regards,
    Pritesh.

  • Best Practice while configuring Traffic Manager for Azure Website

    Hi Team,
    I want to understand what the best practice is when configuring Traffic Manager for an Azure website.
    To give you the background, let me explain my requirement. I have one website for which 40% of the target audience would be in the East US, 40% in the UK, and the remaining 20% in Asia-Pacific.
    Now, what I want is a Failover + Performance based Traffic Manager configuration.
    My thinking:
    1) We need to create 1 website with 2 instances in each region (East US, East Asia, West US for example), so 3 deployments of the website in total (each with a region-based URL).
    2) Create a traffic manager based on performance and add those 3 instances. That would become website-tmonperformance.
    3) Create a traffic manager based on failover and add those 3 instances. That would become website-tmonfailover.
    4) Create a traffic manager and (I don't know the criteria) add both of the above traffic managers here, and take your final URL for the end user.
    I am not sure (1) whether this is the right approach, and (2) if it is, which criterion we should select in the 4th step while creating the final traffic manager: round-robin/performance/failover?
    After all this, if a user tries to access the site from the US, will the traffic manager divert them to the US data centre, or will it wait for failover, so that until then they are served from East Asia if East Asia is my 1st instance in the configuration?
    Regards, Brijesh Shah

    Hi Jonathan,
    Thanks for your quick reply. Actually, the question is a bit different; let me explain it a different way.
    I was asking for a recommendation from the Azure Traffic Manager team on whether my understanding is correct or not. We want performance with failover.
    So we have one Azure website; take todoapp as an example. I deployed it in 3 different regions. Now I want to have performance-based routing as well as failover-based routing, but obviously I can't give two URLs to my end user, so on top of that I will require one more traffic manager. So:
    Step 1: I will create one traffic manager with the performance criterion, named TMForPerformance.trafficmanager.com, where I will add all those 3 instances (all are from different regions, so it won't create any issue).
    Step 2: I will create one more traffic manager with the failover criterion, named TMForFailover.trafficmanager.com, where I will add all those 3 instances (all are from different regions, so it won't create any issue).
    Step 3: I will create one final traffic manager with the performance criterion, named todoapp.trafficmanager.com, where I will add these two traffic managers instead of the 3 different regions' websites.
    Question 1) Is this the correct structure if we want to achieve performance with failover, or is there a better solution?
    Question 2) In step 3, which criterion should we select: performance/round robin/failover?
    Regards, Brijesh Shah

  • Toy Store Best Practice: How to implement 'cancel' for register user page

    Let's say a user wants to register or (even edit) his/her account. But after being forwarded to the register (edit) account page, he/she decides not to do it. I would like to implement a 'cancel' button and return the user to wherever he/she was before coming to this page.
    What is the best practice?
    Even worse, suppose the user gets to this page and never saves the entry. The model would be dirty. And then, let's say the user wants to commit something in the DB; it may commit an incorrect (blank) entry for the created row. What I am after is: what is the best practice for keeping track of whether the model gets dirty, and for deleting invalid rows in general?

    You might want to read this thread:
    Cancel operation followed by refresh raises JBO-33035
    (very similar discussion I was having with Steve)

  • Best Practice: A J2EE Blue-Print for a Typical Web App

    Consider a typical synchronous Struts-based web application which does a simple DB search and post. What are some of the main patterns and components that should be used if following the "industry best practices"?
    Does the following flow seem accurate?
    A Struts Action creates a TransferObject and passes it to a Business Delegate. The Delegate finds the appropriate BusinessObject, the BusinessObject uses the Data Access Object, the CRUD operation happens, and the result is sent back to the Action in the same TransferObject.
    Which of these components need an interface?
    What's the best way for these components to interact with each other (factory, etc.)?
    Message was edited by:
    kmkiani

    There are 3 tiers in a Java EE application. (Presentation, Business, Integration).
    The BusinessDelegate in this scenario would be a Presentation-tier business delegate. This guy would interact with a Session Facade who lives on the Business-tier. The SessionFacade is the abstraction on the Business-tier and the Business Delegate is the abstraction on the Presentation-tier. It is these guys that have direct communication. This design enables low coupling between the actual implementations of each area. If done properly, you could go from EJB to Web Service to POJO business models without ever having to change anything in the Presentation-tier.
    These object-oriented design patterns are primarily for Enterprise applications with extensive Quality-of-Service requirements.
    In your scenario, the Presentation-tier would contain a MVC-based web application, i.e. Struts. The business model and business/domain requirements would be implemented in the Business-tier.
    Presentation Tier - Struts Web Application
    Business Tier - (EJB | POJO | WEB SERVICES) Application
    Integration Tier - (Relational Database | File System | XML Database | EIS)
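    To make the decoupling concrete, here is a minimal, hypothetical Java sketch (all names invented) of a presentation-tier Business Delegate hiding the business-tier Session Facade behind an interface:

        // All names invented for illustration. The delegate is the only class the
        // Struts Action talks to; swapping the facade implementation (EJB, POJO,
        // web service) never touches the presentation tier.
        class OrderDto {
            final long id;
            OrderDto(long id) { this.id = id; }
        }

        interface OrderFacade {                          // Session Facade (business tier)
            OrderDto findOrder(long orderId);
        }

        class PojoOrderFacade implements OrderFacade {   // one possible implementation
            public OrderDto findOrder(long orderId) { return new OrderDto(orderId); }
        }

        class OrderDelegate {                            // Business Delegate (presentation tier)
            private final OrderFacade facade = new PojoOrderFacade(); // or a JNDI/EJB lookup
            OrderDto findOrder(long orderId) {
                return facade.findOrder(orderId);        // caching/retries could be added here
            }
        }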

  • Best practice to use Time Capsule for backup of 3 different products (MBP 15 OS X Lion, MBP 13 OS X Lion and MBA 13 OS X Mountain Lion)? Only the MBP15 is backed up regularly.

    When I want to save data from the MBA13 on Mountain Lion (wireless) with Time Capsule, is there any best practice to follow?
    After that, assuming the data are backed up, can we easily differentiate the data in Time Capsule belonging to the MBP15/13 and the MBA13?

    Unfortunately, Apple left off the Ethernet port....the most important port in networking....on the MBA, so your first backup of the entire Mac will need to be done using wireless.
    That may take a day or two unless your MBA has a Thunderbolt port on it in which case you could add a Thunderbolt to Ethernet adapter and connect the MBA to the Time Capsule for the first backup using an Ethernet cable.  It will probably only take 3-4 hours or less doing it this way.
    Once you have the first complete backup done, subsequent backups can be done using wireless, since they will only take a few minutes on average.
    Both Macs will backup to the Time Capsule using Time Machine automatically. Backups will be kept completely separate, so one Mac will normally only be able to "see" its own backups.

  • Best Practices in use of ABAP for SRM and/or CRM Configuration

    I was wondering if there is a document that defines best practices for the use of ABAP with the installation and customization of SRM and/or CRM, such as the amount of ABAP coding typically required, and best practices around the use of ABAP for customization and configuration.
    Thanks.

    Hi, Johnson
    Sorry, please don't mind, but this is not the right place to ask a question like this.
    Please read "The Forum Rules of Engagement" before posting!
    Thanks and Regards,
    Faisal

  • Best Practices? Rendering to Flash for streaming web....

    I am always impressed with the flash based videos I see streaming on YouTube, FastCompany.Tv and other sites....
    My question... can you please either explain or point me in the right direction for streaming video best practices? Specifically, I am looking for info on best settings to produce the flash video (codecs and/or FCP render settings) and then what do people use as a flash player on their websites to show the end result.
    My goal is to create internal instructional videos for corporate training and then host them on my site (or streaming from Akamai). I would like people to be able to watch it in a flash player embedded on my site (and have it look good even if they click on a full screen button) or download to their iPod.
    Examples of what I like, but I don't know how to do:
    http://www.fastcompany.tv/video/getting-government-work
    Thank you in advance for your expertise and insight.
    -Steven

    I would like people to be able to watch it in a flash player embedded on my site (and have it look good even if they click on a full screen button) or download to their iPod.
    Use the H.264 setting for iPod in Compressor. The h.264 file will play in a JW Flash Player and it's able to be downloaded for iPod viewing.

  • Best practice DNS in VPN environment for Lync2013 clients

    So I do have those site-to-site VPNs to connect the small branch offices to the main office. Internal DNS makes sure that the branch offices can access all the servers/services in the main office with their domain.local namespace.
    In such a scenario, will the Lync 2013 clients connect through the VPN to the internal sites, due to both lyncdiscover and lyncdiscoverinternal being available?
    Wouldn't it cause far less burden on the VPN routers if clients would simply go out to the internet and connect from the external side, so that all the Lync traffic does not have to be stuffed through the VPN pipe? I don't see the point of encrypting the traffic once more.
    Thanks for your suggestions about best practices!
    HST

      Hi,
    When users connect to the corporate network using a VPN client, Lync media traffic is sent through the VPN tunnel. This configuration can create additional latency and jitter because media traffic must pass through an additional layer of encryption and
    decryption. The issue is compounded when the VPN concentrator is busy.
    If you want to connect to the Lync server from the public network, you need to deploy an Edge Server.
    For a solution that forces media traffic through the Edge Servers while still allowing external Lync clients to connect through VPN, refer to the "Solution Configuration" part of the link below:
     http://blogs.technet.com/b/nexthop/archive/2011/11/15/enabling-lync-media-to-bypass-a-vpn-tunnel.aspx
    Best Regards,
    Eason Huang
    TechNet Community Support

  • Best practice using keyword in subject for encryption

    I have set up an encryption content filter on my appliance to encrypt messages that are outbound and have a subject header that begins with the keyword Secure inside brackets: [Secure]. My intent was to eliminate some false positives by including the brackets. What ended up happening was that, for some reason, ALL outbound messages were being encrypted.
    After some more testing it seems like the content filter is ignoring the brackets as a requirement, but I can't seem to find anything in the online help to back this up.
    Can someone assist with verification of the requirements around using a keyword in the subject to allow users to encrypt outbound messages voluntarily?
    Best practices around achieving this, if anyone has them, would also be greatly appreciated.
    Thanks,
    Chris

    Hi Chris,
    While I would have to see the syntax of the filter in question to be 100% sure, it sounds as if your conditional variable is being interpreted as a regular expression rather than as a literal string. The content filters will accept regular expressions, so something like [Secure] could be seen as "look for anything that contains an S, e, c, u, or r".
    Since you said "begins with", we would be looking for anything in the subject that begins with one of these letters. This is due to the fact that the brackets [ ] are special characters in regular expressions, so they need to be escaped. That being said, [Secure] would become \\[Secure\\]; we use the \\ to escape the brackets. In content filters, however, you can get by with using just one escape, as the filter is smart enough to add the additional escape for you. So in the filter it would be something like: begins with \[Secure\]
    Once you enter this in the condition for your filter it will display like the following,
    subject == "^\\[Secure\\]"
    I hope that helps.
    Christopher C Smith
    CSE
    Cisco IronPort Customer Support  
