How to cope with surrogate IDs after migrating to OWB 10gR2

We are busy converting an OWB 9.2 environment to OWB 10gR2.
I have already run into problems with the way dimensions and cubes are used in OWB 10gR2.
Some of my dimensions will grow steeply because of the level rows that are inserted into the new dimensions. These rows exist so that cubes can be linked to dimensions at all levels, not only at the lowest level.
When building the new mappings, surrogate keys are also used for the join between dimension and cube, and this again causes problems for our existing reports.
In OWB 9.2 we prepared the cubes and dimensions in the staging area and then loaded them into the data warehouse.
We can keep the existing columns in the newly defined dimensions, but we have to add the surrogate keys. When loading data with the new dimensions and cubes, the cubes refer to the dimensions by the new surrogate IDs and not by the old IDs, which are now the business keys. So the relation between cube and dimension is no longer based on the business key but on the surrogate key.
Our reports (made with Information Builders) still work with relationships based on the business IDs, but since the foreign keys in the cube refer to the surrogate IDs instead of the business IDs, the reports return no results.
I know I can resolve this by redesigning the reports, but that is a lot of work and not the way we want to go.
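To illustrate with hypothetical names (our real fact and dimension tables are of course named differently), the old report join now matches nothing because the fact table's FK column holds the new surrogate values:

    -- Old report join: business key on the dimension side, but the fact
    -- side now contains surrogate key values, so no rows come back.
    SELECT d.customer_name, SUM(f.amount)
    FROM   sales_fact f
           JOIN customer_dim d ON (f.customer_id = d.customer_id)
    GROUP  BY d.customer_name;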
My alternative solutions are:
1. Keep the new dimensions:
   - rename them;
   - make views with the old dimension names;
   - in the views, switch the old business ID and the surrogate key (see the sketch after this list);
   - the reports then work with the views, and the old joins keep on working.
2. Do not use dimensions and cubes:
   - migrate the mappings to use tables (as a matter of fact, this is what you get after importing a 9.2 MDL into 10.2);
   - no change in table names;
   - the reports keep on working.
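A minimal sketch of the view trick in alternative 1, with hypothetical names (the real dimension and column names would come from our model):

    -- The new OWB dimension was renamed to CUSTOMER_DIM_NEW; the view
    -- reclaims the old name and exposes the business key under the
    -- column name the reports join on.
    CREATE OR REPLACE VIEW customer_dim AS
    SELECT c.customer_bus_id AS customer_id,    -- old reports join on this
           c.dimension_key   AS customer_surr_id,
           c.customer_name,
           c.customer_city
    FROM   customer_dim_new c;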
Alternative solution 1 is rather dangerous because the column names are switched; this makes the model more difficult to understand.
Alternative solution 2 can become a problem if we want to use slowly changing dimensions: we would have to build them as we would have done in OWB 9.2, which leads to complex mappings.
Are there any other alternatives? Can somebody advise on the best answer to these challenges?

Well, couple of ideas here.
First off, do you HAVE to use levels in the dimension? For some of our dimensions, we just define a single "default" level, and tie all attributes to it. For example, with a time dimension, we'd define a default level, and then tie month_id, month_name, quarter_id, quarter_name, year_id, and year_name to that default level. This works out well - you basically get a "flat" dimension with all attributes on the row, thus preventing the generation of keys for the other levels. You'll still get a DEFAULT_LEVEL_KEY, but it will always equal the dimension key and you can just ignore the column.
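For illustration, the relational shape of such a flat dimension might look like this (a sketch only; the column names are hypothetical, not the exact ones OWB generates):

    -- Single "default" level: all attributes hang off the same row, and
    -- DEFAULT_LEVEL_KEY always equals DIMENSION_KEY, so it can be ignored.
    CREATE TABLE time_dim (
      dimension_key     NUMBER PRIMARY KEY,  -- surrogate key OWB populates
      default_level_key NUMBER,              -- always = dimension_key
      month_id          NUMBER,
      month_name        VARCHAR2(20),
      quarter_id        NUMBER,
      quarter_name      VARCHAR2(10),
      year_id           NUMBER,
      year_name         VARCHAR2(10)
    );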
FYI - I've just started playing with OWB 11gR1, and I discovered a few minutes ago that when setting up a dimension level, you don't appear to have to have the surrogate_id attribute turned on. Not sure if this works yet, will let you know.
Last but not least, I read your problem re: the business keys vs. surrogate keys as "we used business keys instead of surrogate keys as the dimension PKs and fact table FKs in our initial DW, and now we have to migrate that"...where OWB of course wants to use surrogates. Couple of thoughts. First off, I don't think you're going to get any auto-magic way to do this - using business keys instead of surrogate keys is definitely non-standard.
The one suggestion I would have is to do this: keep the original business IDs on the fact table as their own columns. Set them up as "measures" with a data type of varchar2 or whatever they already are. OWB will still of course create the "proper" FK columns based on the surrogate keys, but your users will still be able to join based on the "old" business ID joins. The only real price you'd have to pay for this option is a) it takes more space per row on the fact table, and b) you'll probably have to do some indexing. OWB will build the proper bitmap indexes on the surrogate keys, but you'll probably want to add bitmap indexes on the business ID columns.
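A minimal sketch of that last step, again with hypothetical names (OWB's generated names will differ):

    -- Keep the business ID as its own column on the fact table, alongside
    -- the surrogate-key FK that OWB generates, and index it so the old
    -- business-key joins stay fast.
    ALTER TABLE sales_fact ADD (customer_bus_id VARCHAR2(30));

    CREATE BITMAP INDEX sales_fact_cust_bid_bx
      ON sales_fact (customer_bus_id);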
Hope this helps,
Scott

Similar Messages

  • How do I merge my user accounts after migration assistant

    Hello,
    How do I merge my user accounts after using Migration Assistant from a Time Machine backup?

    I wonder how to plan which accounts and rights to have where.
    I had my old original account on my iMac, and then I "migrated" the data to a user account, in order to have some advantages with that setup. Everything worked more or less OK in this setup.
    Yesterday I tried migrating to a new Mac, and suddenly I became aware that the rights on those two earlier accounts were important.
    Especially after doing the migration to the new machine twice, until I got Mail working, I am now totally bewildered as to what solution to aim for. How do I merge the two migrated admin accounts? Keeping the original admin account is also important, since my file system has special rights for that one.
    Could anyone give me more ideas about how to proceed? I think I have working Mail on one of the two migrated accounts (with 100,000 mails or so, it seems...). The other account is the one I would like to have.
    Also I think I want to use a normal user account, not an admin account for my daily use. And I have to see if things still work if I turn off the admin rights...
    Thankful for any advice!
    /groundliner

  • How to cope with a dead pixel

    Yes, I know, there's no fix for a dead pixel (is there?), and there are lots of maybe-fixes for a "stuck" pixel. Well, I have a dead pixel, super-dead, in a really terrible place (the lower right quadrant, just a tad above where, say, the iTunes ministore would be). I'm stuck with it. I can't do anything to get it fixed or replaced (can I?), so my question to the community is about dealing with it. How does one cope with something so aggravating? Are there some tweaks one can do to make, say, white web pages reverse their colors so the pixel is not as apparent? Are there other methods to get rid of it or make it less obvious? I'm going mad. Someone help.
    Macbook 2ghz   Mac OS X (10.4.7)   1GB RAM 120GB HDD

    Well, I do understand how crazy it can drive you when a pixel is dead. Everything was fine on my MB; then about a month in I noticed a pixel that was dead. I too looked at every topic I could find on the subject; nothing helped. So I took my MB into my local Apple Store, and they sent it off for repair. Of course Apple did not fix it because "one pixel is not considered enough of a problem to repair". So the Apple Store actually ordered me a new screen and replaced it for free.
    I was very happy until I turned on the computer at the A.S. and once again it had a dead pixel, in a different spot. They said that there was nothing they could do about it. They would not replace it again because it would screw up their store budget. So I'm learning to deal with it. Believe me it really *****; I'm a perfectionist and it really bugs me to see this flaw. Other than that my MB is great and I love it. Good luck!

  • What to do with old iPhoto library after migration to Photos

    After migration to Photos my hard drive is maxed out. I want to move the 15 GB old iPhoto library to an external drive to free some space, as some have suggested. Others here have said you can just trash the old library. There must be a reason 10.10.3 was designed to retain the iPhoto library. Trash it--really? Another suggestion was to put my 15 GB on a flash drive. So the question is: does Photos need to be able to see the iPhoto library, as it would if it were on an external drive connected to my Time Machine?

    Thanks for your reply, but I'm still confused as to how I lost over 10 GB of HD space after the migration. I understand the conversion is supposed to create more space, not less. When I look in the Pictures folder in Finder, I see two libraries, iPhoto and Photos, each of them approx. 15 GB according to the Get Info menu. I guess what I don't understand is, if the iPhoto library is not needed and is moved to an external drive, why that wouldn't free that amount of space.
    Also, if "optimization" is supposed to move "some" high-res images to the cloud to make room on the HD, how is it that I find myself with only 1 GB left on my HD? What are the criteria that trigger "optimization", and how much room can we expect it to shoot for? There seems to be a variety of answers on this subject, many of them sounding like guesses. Apple's explanation is vague about how it works, so hopefully someone has the accurate, in-depth scoop on this.

  • Synchronize with database is disabled after migration

    Hi
    I have migrated from JDev 11.1.1.4 – JHs 11.1.1.3.35 to JDev 11.1.2.3 – JHs 11.1.2.1.28 and have solved the problems; now everything is fine except one thing. I am not able to synchronize the entity objects with the database! When I right-click on an entity object, the "Synchronize with Database" option is disabled. This happens only for applications that have JHs enabled; for normal ADF applications it works after migration. What is the matter?
    Cheers,
    Ferez

    Ferez,
    have you seen this thread in the ADF forum, Cannot 'Synchronize with database' my entity objects?
    It mentions the existence of a bug that may cause this (though no reference number), and one user posted a workaround they were using.

  • Problem with Java Native Type after Migration from 7.0 to 7.1

    Hi,
    after migrating from NetWeaver 7.0 to NetWeaver 7.1 I get the following error:
    FileDownload 'FileDownload.data': Context attribute 'PrintSelectedView.PdfToDownload.resourceInputStream' has the Java native type 'com.sap.tc.webdynpro.progmodel.api.IWDInputStream' and cannot be bound to this property. Hint: Remove the binding or bind a context element matching the property's type.     
    What have I done wrong?
    How could I fix this problem?
    Best regards,
    Peter

    Hi,
    thanks, this solved the problem.
    Thank you.
    Best regards,
    Peter

  • Problem with 8.1.6 after migrating to Tru64 5.1A

    We just migrated our OS from Tru64 4.0G to Tru64 5.1A.
    Our databases are 8.1.6.3.1.
    After migrating the OS we had problems (ORA-07445) with some of our databases. It seems that the problem is related to direct I/O, and a workaround is to set the DISK_ASYNCH_IO parameter to FALSE.
    What is the impact of setting this parameter to FALSE? Should I expect a big decrease in performance?
    Does anybody know if there is a patch for Tru64 5.1 that would correct this problem?
    Thanks
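    For reference, a minimal sketch of the workaround mentioned above: 8.1.6 predates spfiles, so the parameter goes in the init.ora (file name here is hypothetical) and takes effect on instance restart.

    # init<SID>.ora -- disable asynchronous I/O to work around the ORA-07445
    disk_asynch_io = false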

    After having a closer look at metalink it seems to be obvious, that Forms 6i (8.0.6.) only supports user exits generated with the 8.0.6. precompiler.
    If this is the case new questions follow:
    Is there a way to get an 8.0.6 precompiler?
    Is it possible to connect to an 8.1.5 database with such a user exit? Does anybody have corresponding experiences?
    Our customer uses an 8.1.7 database. What about connecting to that database with the user exit?
    /Ralph

  • Issues with old calendar items after migration from SBS2008R1 to Windows 2012R2+Exchange 2013

    Hello,
    After migrating a customer from Windows Small Business Server 2008 R1 to Microsoft Windows Server 2012 (DC) and Microsoft Windows Server 2012 & Exchange 2013, there are a few issues regarding the GAL and old migrated calendar items that we are unable to resolve:
    Old calendar items, which were exported to PST while the Server 2008 R1 environment was still live, were imported into the user's new mailbox on the Exchange 2013 mail database. This seemed to be working just fine.
    The problems arise when a user tries to edit or re-send an old calendar item. There are actually two problems:
    1. When re-sending an old calendar item, old GAL entries are used (even when re-entering the contact details), the emails are sent to nobody, and you get the mail back from the postmaster.
    2. When updating an old calendar item, you get an error that you do not have permissions, probably because you are not the 'old' Exchange user.
    Thanks in advance!

    Did you use Exmerge or similar to export?
    You may have an issue like that explained here.
    http://blogs.technet.com/b/sbs/archive/2009/05/21/cannot-reply-to-old-emails-or-modify-old-calendar-items-after-pst-mail-migration.aspx
    Robert Pearman SBS MVP | itauthority.co.uk

  • How to continue use Time Machine backup after migration?

    Hi,
    I just purchased a new mac and migrated using my Time Machine backup from my NAS.
    This went smoothly, but now I would like to keep this backup as the base for my new Mac. Unfortunately, this does not seem to work, as my Mac now tries to create a new backup image.
    As I have pretty important documents in my previous backup, it would be nice if there were still a way to continue using it. I read somewhere that Migration Assistant should ask whether we want to continue based on this backup after migration, but I did not receive that question.
    Is there still a way to solve this?
    Thanks in advance for your input!
    Kind regards,
    Fred

    Hi Leonie,
    Thanks for your link!  I am migrating from Leopard to Mavericks.  If I understand correctly, this is the reason why I cannot inherit from the previous backup.
    The procedure to tweak it seems pretty complicated though... not sure I will risk it.
    Thanks anyway!
    KR,
    Fred

  • Problems with new iMac (Alu) after Migration with the Bluetooth settings

    Hello, first of all, apologies for my bad English.
    I have bought a new iMac 24". After migrating my data from my "old" one (iMac 20" C2D), everything works very well. The only thing that doesn't work is the Bluetooth settings. When I want to set up a new device, there is no text in the window, only a few characters...
    Can anyone help me??

    Try downloading and installing the 10.4.10 Combo update. Let the update run alone. Don't use the Mac for anything else while it installs the update, including Airport and Bluetooth. Use a wired keyboard and mouse if you can.
    Mac OS X 10.4.10 Combo Update v1.1 (Intel)
    Repair Disk Permissions after the update is installed and you have restarted the Mac.

  • Re-Installing PC, how to cope with cloud installation

    Hi, I need to reinstall my Lenovo WS (Win 8.1). Back in the pre-cloud days I simply had to deactivate my Adobe software and reactivate it after the operating system installation. How does that work with my Cloud apps? What is the process today?
    Since this is more or less a common issue, I was looking for a standard support database entry, but I was not able to find anything helpful.
    Thanks,
    Sven

    It's easier. You can sign out before wiping your PC, but you don't need to; with CC, you can even sign out of old 'dead' computers using a new computer.
    Just do whatever you need to do, and when you're ready to install CC, start downloading a CC app (after signing in with your subscriber Adobe ID) and the desktop app will install: https://creative.adobe.com/

  • How to cope with DTP getting inactive.

    Hi,
    When we do a system copy of a source system from production to test, we have to restore the connection.
    This is followed by a replication, and all the DTPs get deactivated. How can we prevent this?
    Activating all the DTPs gets us in trouble with the process chains, as the DTPs get new technical names.
    Is activating in development and transporting all DTPs the only solution?
    Any information on this will be appreciated.
    Udo

    Hi.
    Did you follow Note 886102 to perform this system copy?
    We have seen some issues with DTPs and the BDLS process.
    Please also check whether the following notes were applied in your system, and then rerun the BDLS process afterwards:
    1139924    BI7.0(SP18): System names converted incorrectly
    1142908    70SP18: BDLS does not convert pseudo D versions
    1148403    70SP18: BDLS after early replication
    1149141    70SP18: BDLS: Improvements log
    1169659    Correction: Process variants not entered
    Thanks,
    Walter Oliveira.

  • How to cope with Out of Memory Errors

    Hi, I am distributing a desktop application to the general public, written in Java 1.4.2. How much memory is required is roughly proportional to how many files they load, so I've tried to pre-empt OutOfMemoryErrors by checking when memory usage is at 95% and preventing the loading of additional files above that level with the following code:
    protected static void checkMemory()
            throws LowMemoryException
    {
        if (Runtime.getRuntime().freeMemory() < Runtime.getRuntime().maxMemory() * 0.05)
        {
            Runtime.getRuntime().gc();
            if (Runtime.getRuntime().freeMemory() < Runtime.getRuntime().maxMemory() * 0.05)
            {
                MainWindow.logger.severe("Memory low:" + Runtime.getRuntime().freeMemory() / 1024 / 1024 + "MB");
                throw new LowMemoryException("Running out of memory");
            }
        }
    }
    but this code is not robust; sometimes users report LowMemoryException when they have only loaded a few files.
    I tried removing this code, but then the user can get an OutOfMemoryError, which can cause problems with whatever code was running at the time and leave the application in an inconsistent state. If I just exit the application immediately, it would be very annoying for users that are in the middle of something.
    I have also adjusted the -Xms and -Xmx settings but cannot decide on a suitable default.
    What I would ideally like the application to do is to extend its heap space as required, up to the limits of the user's machine, and if it reaches memory limits, handle the lack of memory in a reliable manner, allowing the user to continue using the application in a safe way.

    Unfortunately the metadata is stored and displayed within a JTable, so even if I had it in a database, I think it would all have to be loaded into memory for display within the JTable in a timely fashion.
    Anyway, I think I've found the problem with the original code: it was reporting memory low when almost all the allocated memory was being used, but it hadn't accounted for the fact that the maxMemory limit had not been reached, so more memory could still be allocated.
    I think the correct code is:
    protected static void checkMemory()
            throws LowMemoryException
    {
        if (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory() > Runtime.getRuntime().maxMemory() * 0.95)
        {
            Runtime.getRuntime().gc();
            if (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory() > Runtime.getRuntime().maxMemory() * 0.95)
            {
                MainWindow.logger.severe("Memory low:" + (Runtime.getRuntime().maxMemory() - (Runtime.getRuntime().totalMemory() - Runtime.getRuntime().freeMemory())) / 1024 / 1024 + "MB");
                throw new LowMemoryException("Running out of memory");
            }
        }
    }

  • How to cope with "undefined is not an object" ?

    I'm tiptoeing towards a solution, but stepped on a thumbtack:
    The following snippet creates an error in the function GetTplParams:
    var template = "DDD-BibFM-tpl.fm";  // located in script dir
    var outFile  = "DDD-BibFM.fm";      // located in the source dir

    oDoc = OpenTemplate ();
    alert ("Is template open now?");

    function OpenTemplate () {
      var tplDir, tplFile, openParams, openReturnParams, oFile;
      tplFile = "E:\\_DDDprojects\\FM+EN-escript\\escript\\DDD-BibFM-tpl.fm";
      // SimpleOpen does not take the tplFile, but opens the dir of the active document
      // oFile = SimpleOpen (tplFile, true);
      openParams = GetTplParams ();
      openReturnParams = new PropVals ();
      oFile = Open (tplFile, openParams, openReturnParams);
      return oFile;
    }

    function GetTplParams () {  // =>>> "undefined is not an object" on line 22
      var params, i;
      // Change the params
      i = GetPropIndex (params, Constants.FS_RefFileNotFound);
      params[i].propVal.ival = Constants.FV_AllowAllRefFilesUnFindable;
      i = GetPropIndex (params, Constants.FS_FileIsOldVersion);
      params[i].propVal.ival = Constants.FV_DoOK;
      i = GetPropIndex (params, Constants.FS_FontNotFoundInDoc);
      params[i].propVal.ival = Constants.FV_DoOK;
      i = GetPropIndex (params, Constants.FS_LockCantBeReset);
      params[i].propVal.ival = Constants.FV_DoOK;
      i = GetPropIndex (params, Constants.FS_FileIsInUse);
      params[i].propVal.ival = Constants.FV_OpenViewOnly;
      i = GetPropIndex (params, Constants.FS_AlertUserAboutFailure);
      params[i].propVal.ival = Constants.FV_DoCancel;
      /*
      i = GetPropIndex (params, Constants.FS_MakeVisible);
      params[i].propVal.ival = false;
      */
      return (params);
    }
    (inserting JS code really has its quirks in this editor).

    Thanks Klaus - now it works!
    To Rick.
    The presented piece of code is part of 're-furbishing' the FrameMaker-to-EndNote connection.
    In a first step I collect temporary citations (such as [Daube, 1969, #123]) from text, footnotes and tables into an array. This part already works fine (I had to postpone further development for a year now...).
    Then I write this data to a new document, which is created from the template - this is where the mentioned piece comes into play.
    This file is then saved as RTF to be worked on by the bibliographic application EndNote (or Citavi in another case), which resolves the temporary citations into formatted citations and the bibliography.
    After that, the modified RTF is read into FM and the temporary citations in the FM document/book are replaced by the formatted citations.
    The user then copies the bibliography (as text only) into his appropriate chapter/section and formats it to his liking.

  • How to cope with XMLAgg bug 4277241?

    Hi,
    I have hit bug 4277241, which was referred to here as well: Re: Important--Bug(ORA-22813) fixes in Oracle Database 10g Release 10.2.0.4, for my 11.2.0.1 database.
    This is due to the result set being too big.
    Now I have read the MetaLink note and it says:
    xmlagg() with a GROUP BY can fail with ORA-22813 if the result is too large.
    This is normal and expected as there is a hard coded limit on the result
    size *BUT* this fix allows event 44410 to be set which uses a different method
    to evaluate the XMLAGG(). The event is disabled by default.
    NOTE: If the event is set to work around the 30K limit then LOB leakage
          can occur. This is expected and is due to the way the limit is
          avoided.
    Workaround
      Rewrite the query without using "group by", by using the select distinct method.
    Apparently using the event causes a memory leak, which is of course undesirable.
    So is the problem caused by using GROUP BY in conjunction with XMLAgg, or just by using XMLAgg altogether?
    If the answer is the GROUP BY, then how does one rewrite the query without resorting to subqueries?
    DISTINCT can only be used with the SELECT keyword, so the second XMLAgg used below cannot be preceded by a DISTINCT.
    Using subqueries for every nesting level makes the query rather large and complicated; that is the reason for asking.
    The query below can be rewritten using subqueries, thus avoiding the GROUP BY, but I am more interested in alternatives using DISTINCT or whatever.
    Of course the query below doesn't give the error, but it does show what I mean:
    select xmlagg(
             xmlelement("department",
               d.department_name,
               xmlagg(
                 xmlelement("employee",
                   xmlelement("first-name", e.first_name),
                   xmlelement("last-name", e.last_name))
                 order by e.last_name)
             )) as "xml"
    from   employees e
           join departments d on (e.department_id = d.department_id)
           join locations l on (d.location_id = l.location_id)
    group by d.department_name;
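    For comparison, a sketch of the correlated-subquery rewrite alluded to above, which avoids the outer GROUP BY entirely:

    select xmlagg(
             xmlelement("department",
               d.department_name,
               -- scalar subquery builds each department's employee list
               (select xmlagg(
                         xmlelement("employee",
                           xmlelement("first-name", e.first_name),
                           xmlelement("last-name", e.last_name))
                         order by e.last_name)
                from   employees e
                where  e.department_id = d.department_id)
             )) as "xml"
    from   departments d;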

    Hi,
    Could XQuery be an option for you?
    SELECT XMLQuery(
    '<root>{
       for $d in fn:collection("oradb:/HR/DEPARTMENTS")/ROW
       return element department {
         attribute name { $d/DEPARTMENT_NAME/text() }
       , for $e in fn:collection("oradb:/HR/EMPLOYEES")/ROW
         where $e/DEPARTMENT_ID = $d/DEPARTMENT_ID
         order by $e/LAST_NAME
         return element employee {
           element first-name { $e/FIRST_NAME/text() }
         , element last-name { $e/LAST_NAME/text() }
         }
       }
     }</root>'
    returning content)
    FROM dual;
