Help understanding fast and complete refresh

hi,
- I have a table with 3 rows where col2='T' and 1 row where col2 is null.
- My query (fast refresh) is "select * from table where col2='T'".
- After the first sync, the 3 rows are on my mobile.
- Then I set col2='T' in the 4th row and sync with msync, but after the sync there are still only 3 rows on my mobile.
With a complete refresh, all 4 rows would be on my mobile. But what will happen to the changed data in my 3 rows on the mobile during such a complete refresh?
I want the 4th row on my mobile too, without losing the changes in my 3 rows on the mobile.
Thanks,
Rico

If you are using fast refresh, then after changes are made on the server side, the MGP compose process must run to build the out-queue data for the client before the change becomes available for download.
For tables with a fast refresh, if you look at the database you will see that when you publish, three triggers are created for the base table (<tablename>1/u/d) along with a table CLG$<tablename> (there are others as well). When a change is made, it fires the triggers; this puts the primary key of the changed record into the CLG$ table and also creates a record in mobileadmin.c$all_sid_logged_tables, which drives the list of objects to process for the compose cycle.
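A quick way to verify that your server-side update was actually logged for the next compose cycle is to query those objects directly. This is only a hedged sketch: the table name MYTABLE is hypothetical, and the exact layout of the CLG$ table and of mobileadmin.c$all_sid_logged_tables may differ between versions, so check your own repository.

-- Should contain the primary key of the updated 4th row:
SELECT * FROM clg$mytable;
-- The base table should appear in the list of objects queued for compose:
SELECT * FROM mobileadmin.c$all_sid_logged_tables;

If the change shows up here but still never reaches the client, the problem lies in the compose/download stage rather than in the triggers.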
The MGP compose sometimes stops processing for no apparent reason, so this may be a problem. Also, this works fine for snapshots on tables; if however the object is a view, or a column is derived (say from a function), the triggers go only on the base table(s), and if the base table data does not change, no data will be composed.
Complete refreshes should not necessarily take longer to process (the download may take longer for a big table), but they have the disadvantage of overwriting data just created on the client; that data will not re-appear until the next synchronise (after it has been posted to the server). This is the main disadvantage.

Similar Messages

  • Delta and Complete Refresh

    Hi,
    What is the significance of the cache refresh (SXI_CACHE)?
    And what is the difference between a delta refresh and a complete refresh?
    Thanks and Regards,
    Vijay Raheja

    Hi Vijay,
    You can view all cached objects by running transaction SXI_CACHE. You can see whether the cache is up to date, view all cached data, and start a manual cache refresh.
    If you suspect that a change in the Integration Directory has not been replicated to the runtime cache, you can refresh the cache manually. To refresh the cache in SXI_CACHE, choose XI Runtime Cache from the menu.
    In a delta refresh, not all the data is replicated; only the data that has changed since the last cache refresh is updated.
    Hope this clarifies.
    Regards.
    Praveen

  • Help understanding "KERN_INVALID_ADDRESS " and "NSLock Errors"

    Hello,
    Could someone point me in a direction to help find the cause of the following errors?
    (Adobe still has not given me an answer, and neither has Apple.)
    – KERN_INVALID_ADDRESS
    – EXC_BAD_ACCESS (SIGSEGV)
    – _NSLockError()
    I am experiencing a lot of trouble with Adobe CS3 software - InDesign CS3 especially - and I am trying to figure out why and where these errors come from. I get the following log messages each time the application crashes, when I close a document or when I quit the application (it crashes just after). I have tried many things from the Adobe support website (reinstalling applications, trashing prefs, removing fonts, applying Adobe updates, etc.) without any success.
    Please find below part of the usual messages from the console just after the crash:
    10.08.09 00:15:06 [0x0-0x1db1db].com.adobe.InDesign[1277] objc[1277]: Class EpicPanel is implemented in both /Applications/_Design/CS3/Adobe InDesign CS3/Adobe InDesign CS3.app/Contents/Frameworks/adobe_personalization.framework/adobe_personalization and /Applications/_Design/CS3/Adobe InDesign CS3/Adobe InDesign CS3.app/Contents/Frameworks/adobe_registration.framework/adobe_registration. Using implementation from /Applications/_Design/CS3/Adobe InDesign CS3/Adobe InDesign CS3.app/Contents/Frameworks/adobe_registration.framework/adobe_registration.
    10.08.09 00:16:28 Adobe InDesign CS3[1277] * -[NSConditionLock dealloc]: lock (<NSConditionLock: 0x2b84090> '(null)') deallocated
    while still in use
    10.08.09 00:16:28 Adobe InDesign CS3[1277] * Break on _NSLockError() to debug.
    10.08.09 00:16:28 Adobe InDesign CS3[1277] * -[NSLock unlock]: lock (<NSLock: 0x2b84010> '(null)') unlocked when not locked
    10.08.09 00:16:28 Adobe InDesign CS3[1277] * Break on _NSLockError() to debug.
    Or also (part of the log only):
    Process: Adobe InDesign CS3 [318]
    Path: /Applications/_Design/CS3/Adobe InDesign CS3/Adobe InDesign CS3.app/Contents/MacOS/Adobe InDesign CS3
    Identifier: com.adobe.InDesign
    Version: 5.0.4.682 (5040)
    Code Type: X86 (Native)
    Parent Process: launchd [148]
    Date/Time: 2009-08-06 17:20:58.171 +0200
    OS Version: Mac OS X 10.5.7 (9J61)
    Report Version: 6
    Anonymous UUID: 50C2763C-540E-48FB-B9A6-33FE38FA7327
    Exception Type: EXC_BAD_ACCESS (SIGSEGV)
    Exception Codes: KERN_INVALID_ADDRESS at 0x00000000c98ba627
    Crashed Thread: 0
    Thanks in advance for your message if you have an idea or something...
    Rico

    Thank you Kappy
    As it is said:
    "If a thread tries to unlock theLock, but it’s already unlocked, undesirable things may happen"
    Undesirable things do happen, because it crashes my application all the time. As I'm no developer either, I'm trying to understand what can cause this conflict (whether it's something in my OS X or InDesign configuration), but such logs seem too difficult for a common user to understand.
    I've read that somewhere on the Apple developer site too. I think that's what happened here.
    I'm wondering if it isn't coming from the "adobe registration" frameworks (I've no idea what they are), because my crash logs mention them quite often.
    I've also read that KERN_INVALID_ADDRESS can be caused by bad RAM, but I've tested mine with TechTool Deluxe/Rember and it seems OK.
    Thanks anyway,
    Rico

  • Need help understanding profiles and color management

    I made the big leap from inexpensive inkjets to:
    1 Epson 3800 Standard
    2 Spyder3Studio
    I have a Mac Pro Quad, Aperture, PS3, etc.
    I have a steep learning curve ahead, here's what I've done:
    1 Read a lot of books, watched tutorials, etc.
    2 Calibrated the monitor
    3 Calibrated the printer several times and created .icc profiles
    What I've found:
    1 The sample print produced by Spyder3Print, using the profile I created with color management turned off in the print dialog, looks very good.
    2 When I get into Aperture, and apply the .icc profile I created in the proofing profile with onscreen proofing, the onscreen image does not change appreciably compared with the no proof setting. It gets slightly darker
    3 When I select File>Print image, select the profile I created, turn off color management and look at the resulting preview image, it looks much lighter and washed out than the onscreen image with onscreen proofing turned on.
    4 When I print the image, it looks the same as was shown in the print preview...light and washed out, which is much different than what is shown in edit mode.
    5 When I open PS3 with onscreen soft-proofing, the onscreen image is light and washed out...just like displayed in PS3 preview. If I re-edit the image to look OK onscreen, and print with the profile and color management turned off, the printed image looks OK.
    So, why am I confused?
    1 In the back of my simplistic and naive mind, I anticipated that in creating a custom printer profile I would only need to edit a photo once, so it looks good on the calibrated screen, and then a custom printer profile will handle the work to print a good looking photo. Different profiles do different translations for different printers/papers. However, judging by the PS work, it appears I need to re-edit a photo for each printer/paper I encounter...just doesn't seem right.
    2 In Aperture, I'm confused by the onscreen proofing does not present the same image as what I see in the print preview. I'm selecting the same .icc profile in both locations.
    I tried visiting with Spyder support, but am not able to explain myself well enough to help them understand what I'm doing wrong.
    Any help is greatly appreciated.

    Calibrated the printer several times and created .icc profiles
    You have understood that maintaining the colour is done by morphing the colourants, and you have understood that matching the digital graphic display (which is emissive) to the print from the digital graphic printer (which is reflective) presupposes a studio lighting situation that simulates the conditions presupposed in the mathematical illuminant model for media-independent matching. Basically, for a display-to-print match you need to calibrate and characterise the display to something like 5000-5500 kelvin. There are all sorts of arguments surrounding this, and you will find your way through them in time, but you now have the gist of the thing.
    So far so good, but what of the problem posed by the digital graphic printer? If you are a professional photographer, you are dependent on your printer for contract proofing. Your prints you can pass to clients and to printers, but your display you cannot. So this is critical.
    The ICC Specification was published at DRUPA (Druck und Papier, the print and paper trade fair) in Düsseldorf in May 1995, and the ColorSync 2 Golden Master is on the WWDC CD for May 1995. Between 1995 and 2000 the pure doctrine (die reine Lehre) was to render your colour patch chart in the raw condition of the colour device.
    The problem with this is that in a separation, the reflectance of the paper (which is how you get to see the colours of the colourants laid down on top of it) and the amount of colourant (solids and combinations of tints) together give you the gamut.
    By this argument, you would want to render the colour patch chart with the most colourant, but what if the most colourant produces artifacts? A safer solution is to have primary ink limiting as part of the calibration process prior to rendering of the colour patch chart.
    You can see the progression e.g. in the BEST RIP, which since 2002 has been owned by EFI (Electronics For Imaging). BEST started by allowing access to the raw colour device, with pooling problems and whatnot, but then introduced primary ink limiting and linearisation.
    The next thing you need to know is what colour test chart to send to the colour device, depending on whether the colour device is considered an RGB device or a CMYK device. By convention, if the device is not driven by a PostScript RIP it is considered an RGB device.
    The colour patch chart is not tagged, meaning that it is deviceColor and neither CIEBased nor ICCBased colour. You need to keep your colour patch chart deviceColor, or you will have a colour characterisation of a colour-managed conversion, which is not what you want.
    If the operating system is colour managed through and through, how do you render a colour test chart without automatically assigning a source ICC profile for the colourant model (Generic RGB Profile for three component, Generic CMYK Profile for four component)?
    The convention is that no colour conversion occurs if the source ICC device profile and the destination ICC device profile are identical. So if you are targeting your inkjet in RGB mode, you open an RGB colourant patch chart, set the source ICC profile for the working space to the same as the destination ICC profile for the device, and render as deviceColor.
    You then leave the rendered colourant test chart to dry for one hour. If you measure a colourant test chart every ten minutes through the first hour, you may find that the soluble inkjet inks in drying change colour. If you wait, you avoid this cause of error in your characterisation.
    As you will mainly want to work with loose photographs, and not with photographs placed in pages, when you produce a contract proof using Absolute Colorimetric rendering from the ICC profile for the printing condition to the ICC profile for your studio printer, here's a tip.
    Your eyes, the eyes of your client, and the eyes of the prepress production manager will see the white white of the surrounding unprinted margins of the paper, and will judge the printed area of the paper relative to that.
    If, therefore, your untrimmed contract proof and the contract proof from Adobe InDesign or QuarkXPress, or an EFI or other proofing RIP, are placed side by side in the viewing box, your untrimmed contract proof will work as the visual reference for the media white.
    The measured reference for the media white is in the ICC profile for the printing condition, to be precise in the WTPT White Point tag that you can see by double-clicking the ICC profile in the Apple ColorSync Utility. This is the lightness and tint laid down on proof prints.
    You, your client and your chosen printer will get on well if you remember to set up your studio lighting, and trim the blank borders of your proof prints. (Another tip: set your Finder to neutral gray and avoid a clutter of white windows, icons and so forth in the Finder when viewing.)
    So far, so good. This leaves the nitty-gritty of specific ICC profiling packages and specific ICC-enabled applications. As for Aperture, do not apply a gamma correction to your colourant patch chart, or to colour-managed printing.
    As for the Adobe applications, which you say you will be comparing with, you should probably be aware that Adobe InDesign CS3 has problems. When targeting an RGB printing device, the prints are not correctly colour managed, but basically bypass colour management.
    There's been a discussion on the Apple ColorSync Users List and on Adobe's fora, see the two threads below.
    Hope this helps,
    Henrik Holmegaard
    technical writer
    References:
    http://www.adobeforums.com/webx?14@@.59b52c9b/0
    http://lists.apple.com/archives/colorsync-users/2007/Nov/msg00143.html

  • Materialized view (creating and complete refresh)

    Good day.
    I have a huge query that is used to create a materialized view. The query consists of 9 joins, a lot of aggregations and some subqueries, so we can say it is rather big. The query itself executes in about 30 seconds and returns about 200 rows, yet the materialized view takes more than 30 minutes to create and refresh. Can someone please explain to me the mechanism of materialized view creation that causes such bad performance?
    We use Oracle Database 9.2.0.4.
    Thank you in advance.

    I've found the solution. Maybe it will be useful for someone, even though the 9.2 database is less used today than 10.2.
    I studied carefully the plan of the query and the plan of the insert statement that is used when creating the materialized view, and found the cause of the trouble. The insert statement generates VIEW PUSHED PREDICATE and BITMAP CONVERSION FROM ROWIDS steps when parsed. The bitmap conversion can be removed by setting the initialization parameter btree_bitmap_plans to false (a well-known issue), but I decided not to change the production environment. The pushed predicate can be removed by adding the NO_PUSH_PRED hint to each subquery used in the materialized view (see the sketch below). This step reduced the time of materialized view creation and refresh to about a minute.
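    As an illustration, here is a minimal sketch of the hint placement (the table, view and alias names are hypothetical; NO_PUSH_PRED takes the alias of the subquery whose predicate pushing you want to disable):

    CREATE MATERIALIZED VIEW report_mv
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS
    SELECT /*+ NO_PUSH_PRED(s) */ r.region_name, s.total_amount
    FROM regions r,
         (SELECT region_id, SUM(amount) AS total_amount
            FROM sales GROUP BY region_id) s
    WHERE r.region_id = s.region_id;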
    Thanks to everyone who tried to help.

  • Help understanding iTunes and sync, iTunes 10.5.0.142 does not import

    OK, first, I'm an old phart at a veterans home, so keep that in mind! Lol.
    I'm not sure I understand iTunes.
    I thought it was supposed to sync, i.e. keep on your iPod what you have in your iTunes library.
    In other words, if I have a playlist and I delete a few songs from it and add newer ones, then when I sync my iPod Touch, it should make the iPod Touch playlist be like the one in iTunes and delete the songs I deleted and add the songs I added - am I correct in that assumption?
    Second, this latest iTunes version: I tried adding files, but they do not show up as recently added. Yesterday I added 200 and only 122 showed up in the list.
    Third, if I want to wipe out my entire MP3 library in iTunes, but keep my videos and stuff, how can I do this so that I can re-import stuff afterwards?
    Thanks for any help.

    I had the same problem with an earlier version of iTunes, but when I upgraded, the problem solved itself.

  • Help understanding ERPi and planning

    Hi All,
    Planning/Essbase. Version is 11.1.2.
    Can you please help me understand the role of the ERPi adapter? I have Hyperion Planning sitting on an Essbase cube. We load data and metadata from E-Business Suite 11. From forms and reports I want to drill back to the data in EBS. I have installed and configured the ERPi adapter and the scenarios in ODI.
    Question 1)
    Do I need to load data and metadata to be able to drill through?
    Question 2)
    I have set up my target and source systems - where do I configure the drill-through?
    thanks in advance for any help offered.

    Hi There,
    To answer your questions...
    1. From what I've seen, this part of the process is integral to the product working - so the answer is yes.
    2. Drill-through is defined in a number of places; one of those is the "Target Application Registration" within ERPi, and there is also a flag within the adapter.
    Oracle has produced some good step-by-step documentation on this; please check Metalink for note ID 951369.1, which will take you through the process.
    HTH
    Mark

  • Need help understanding soundtracks and video

    Hi,
    I write music for myself (http://www.youtube.com/jgelhaar or http://www.myspace.com/bonjimmy) and I just saw the movie "The Holiday", in which Jack Black's character writes music for video. My question is: what kind of software handles this type of thing in the real world? Obviously I don't care so much about the PC side...
    I have a contact who makes videos for YouTube, and he has used my music on some of them (an example is here: http://www.youtube.com/watch?v=9Zf3ET52BvA ). But in these cases he just takes something I've done and puts it on his video as a soundtrack. I think it would be much better for me to write the music specifically for the video instead of the other way around, so I need to know what kind of software one would use for that purpose. With that, it would be nice to know what kind of a rig would need to be put together. Right now I only have a couple of G4s at my disposal, but I am in the market for a new system once I can put the funds together. :/
    Any help or comments would be greatly appreciated.
    Jimmy

    I figured the G4 wouldn't be able to hack it and I plan on getting a new system for this.
    So when you're composing in Logic, is it a recording application that also shows the video or does Logic have to sync with something else for the video?
    Now that you've assured me that Logic 8 is the right place for me, I will do more reading about it. Thanks a lot for your comments.

  • Need help understanding JPA and detached entities.

    I keep getting the dreaded "Cannot Persist detached object" and I cannot understand why EclipseLink is even trying to persist the object.
    The following is a general scenario of the issue:
    1. I have 3 objects "Parent", "Child", and "Agency".
    2. I create a new "Agency" and persist it using em.persist(agency).
    3. The "Parent" object contains a OneToMany relation to "Child" and "Child" has a reference to "Agency".
    4. So I do something like:
    agency = new Agency();
    em.persist(agency);        // agency becomes managed (and is detached once its context ends)
    child = new Child();
    child.setAgency(agency);   // child references the agency
    parent = new Parent();
    parent.addChild(child);
    em.persist(parent);        // Cascade.ALL on the OneToMany also persists child
    I have Cascade.ALL on the OneToMany relation, so I expect that persisting the parent will also persist the child. This part works; however, for some reason the agency is also being saved, and that's where I get the error.
    Now if agency was not already persisted, everything works fine. Parent, Child, and Agency all get persisted.
    Since there isn't really a way to merge() the agency, how do I handle this issue?
    I don't really understand all the clones and how merge works, so I don't have a good grasp on how cascade works its way through the objects. I did step through the code in the debugger, and it simply wants to register agency as a new object even though the primary key is set.

    Hello,
    I believe what you are showing and trying just confirms what I mentioned in my previous post. The specification does not allow you to persist a detached entity and forces an exception to be thrown. When you have a detached entity referenced through a relation marked with cascade persist and then call persist on the owning object, it is the same as calling persist on the detached object directly - you will get an exception. If the relation is not marked cascade persist and agency is new, then you will definitely get an exception: without the provider finding out and throwing its own exception, the child object would have tried to insert a foreign key to an agency that doesn't exist.
    Not marking the relation as cascade persist, or using find, will only work if the agency exists and was previously persisted. The problem, as you mention, is that you might have made changes to the detached agency, or it may not already exist - the JPA provider only finds changes made to managed objects. Merge can be used when you do not know if it exists, and it will pick up changes made to the detached object.
    Solutions are:
    1) agency = em.merge(agency);
    child.setAgency(agency);
    or 2) em.merge(parent);
    Solution 1 might be more efficient, since merge on new objects could require more resources than persist due to existence checking requirements.
    Best Regards,
    Chris
    ...

  • Need a little help understanding java and /etc/profile.d/***

    A few days ago I installed jre-6u10-1 and jdk-6u10-1 via pacman on my desktop machine; for the web plugin.
    The packages installed in /opt/java/...
    I later installed jedit, via pacman.  It runs fine as it (/usr/bin/jedit) is hardcoded to look for java in /opt/java/...
    Today, I installed jedit (pacman -S jedit) on my laptop which did not have any java installed.
    Pacman automatically installed openjdk6-jre-6u10-1 as a prerequisite.
    Openjdk installs to /usr/lib/jvm/..., therefore jedit fails as /opt/java... is non-existent.
    I suppose Sun's java and the opensource version are installed in different locations to keep from overwriting each other.
    I could easily edit /usr/bin/jedit to point to the openjdk location and it would work.
    But I'm a little uncertain of the following three files:
    Sun's java installs /etc/profile.d/jre.sh and /etc/profile.d/jdk.sh
    which do the following (respectively):
    export PATH=$PATH:/opt/java/jre/bin
    if [ ! -f /etc/profile.d/jdk.sh ]; then
    export JAVA_HOME=/opt/java/jre
    fi
    and
    export J2SDKDIR=/opt/java
    export PATH=$PATH:/opt/java/bin
    export JAVA_HOME=/opt/java
    The open-source java package installs /etc/profile.d/openjdk6.sh
    which does the following:
    export J2SDKDIR=/usr/lib/jvm/java-1.6.0-openjdk
    export J2REDIR=$J2SDKDIR/jre
    export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk
    #export CLASSPATH="${CLASSPATH:+$CLASSPATH:}$J2SDKDIR/lib:$J2REDIR/lib"
    So I am left with a few options:
    1.  edit /usr/bin/jedit to look in the opensource location
    2.  edit /usr/bin/jedit to use the $J2SDKDIR variable when looking for java
    3.  update the $PATH via /etc/profile.d/openjdk6.sh to include java's location
    I think all three methods would work, but I have limited experience with java and would like some input as to which proposed solution is the best going forward.
    What methods are other java apps using to find java?
    Thanks
    (edit: more descriptive title)
    Last edited by cjpembo (2008-11-11 04:24:47)

    I think the best option would be to ask the maintainer to modify the jedit-bin file, and have it launch
    $J2SDKDIR/jre/bin/java -cp $CP -Djedit.home="/usr/share/jedit" -mx${JAVA_HEAP_SIZE}m org.gjt.sp.jedit.jEdit $@
    That way, it should work with both java implementations (completely untested, btw).

  • Help understanding "delete and move"

    I've recorded a 20-minute podcast, basically one long stream of me talking. I'm trying to figure out a way to cut out all the "ums" and "uhs" while moving the rest of the audio stream over, so there isn't a big pause where I've cut out the audio.
    The closest thing I can find is "delete and move", but it doesn't seem to work when I highlight the "um" waveform in the editing window; delete and move isn't an available option there. If I just "delete" the section, it doesn't close the gap, and I have to move the whole subsequent wave file manually to close the gap.
    Anyone have an idea how to make this a little easier?
    Thanks!

    I need to do this with my guitar recording input at times. You need to use the split and join tools:
    1. Place the timeline at the beginning of the "um" or "uh" and press Command-T to split the section you have highlighted green.
    2. Move the timeline to the end of the "um" or "uh" and press Command-T again, making sure the section is still bright green.
    3. Click off the section to deselect it, then click on the sectioned area to highlight it, and delete it, making sure this is the only area bright green when you delete.
    4. Bring the two remaining sections together and join them. Joining requires that the two sections be touching; with both sections highlighted, press Command-J, and the deleted section is gone along with the time gap it left.
    From here, play around with pause lengths and insert pauses as you will. This seems rather verbose, but that is my way of explaining a purely visual thing.

  • Help understanding mail and MobileMe

    I hope this isn't too silly a question.
    I have my (MobileMe) mail set up in Mail.app as an IMAP account. Everything works great, except that when I move folders off of .Mac and drop them into ON MY MAC (for storage), they won't arrange themselves alphabetically. I am not quite certain how they are ordered (it seems the last one in shows up toward the bottom). Is there a way to fix this? I have a lot of mailboxes (one per client) and would like them to be organized alphabetically.
    Any thoughts?

    K/C R,
    In the mail display you can sort by any field displayed by clicking the column title,
    i.e. From/Subject/Date Received, etc.
    It will toggle between ascending and descending order.
    Edit: Sorry, I might have misinterpreted your question. Why don't you leave them all in the Inbox and set up smart folders for each one? My smart folders all display alphabetically.
    Cheers
    Message was edited by: captfred

  • Materialized view fast refresh or complete refresh?

    Hi Guys,
    Need help here.
    We were asked to provide a recommendation regarding the replication of our Production schema to the Reporting schema.
    Here are the conditions:
    1. The Production schema and the Reporting schema reside on one database server, only in different instances.
    2. There are more or less 300 tables to replicate.
    3. They only want to replicate the changes that happened on the tables on the said day (insert, update).
    4. Replication will only happen at the end of business hours (end of day).
    What we have in mind is materialized views.
    What would be best to use, fast refresh or complete refresh?
    What will be the effect on performance? On resources?
    Or is there any other method?
    Appreciate your reply on this.
    Thanks,
    Nina

    Using MVs (materialized views) is one of the possible ways to do what you have been asked.
    A fast refresh of an MV is faster than a complete refresh because it is based on the "deltas" since the last refresh. But it requires an MV log on the table in the primary DB. This MV log consumes some space in the tablespace, and maintaining it during DML operations on the table adds some overhead, so they take a little longer to complete.
    How much space is consumed by the MV log depends on many factors:
    1/ the number of DML changes on the table
    2/ the number of consumers (there can be more than one MV based on the same table using the same MV log)
    3/ the period of MV refresh
    A plus of using MVs is that they are very simple and you don't need any special license to use them.
    Another possible option is GoldenGate (previously Streams).
    The next option is procedural replication - you have to write your own replication mechanism.
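    To make the fast-refresh option concrete, here is a minimal sketch (the schema and table names are hypothetical, and since the reporting schema lives in a different instance, the MV would normally select over a database link):

    -- On the production side: the MV log that captures the deltas.
    CREATE MATERIALIZED VIEW LOG ON prod.orders WITH PRIMARY KEY;

    -- On the reporting side: a fast-refreshable MV, refreshed on demand.
    CREATE MATERIALIZED VIEW orders_mv
    REFRESH FAST ON DEMAND
    AS SELECT * FROM prod.orders;

    -- End-of-day job:
    BEGIN
      DBMS_MVIEW.REFRESH('ORDERS_MV', method => 'F');
    END;
    /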

  • OLAP re-aggregation functionality or an alternative to complete refresh

    I have a question related to re-aggregation.
    My scenario:
    1) I have a Product dimension (dim) and a Sales cube.
    2) The Product dim is used in the Sales cube.
    3) The first time, I do a complete refresh of the dim and the cube.
    4) But if I modify data in the Product dim for one of the hierarchies and do a complete refresh of the dim, the new data will be available in the Product dim, and after that I have to do a complete refresh of the Sales cube to reflect the new dim hierarchy data.
    5) My question is: if I have changed data in the dim for one of the hierarchies and done a complete refresh of it, is there any way to get this data into the cube without a complete refresh? Is there any re-aggregation functionality in Oracle OLAP cubes that reflects the data in the cube without a complete refresh?
    Please answer soon.
    Thanks in advance.

    Try enabling materialized view logs on the source star schema (dim/fact tables), i.e. create all the objects advised by the Relational Schema Advisor in the Materialized Views tab of the cube - the Sales cube in your case.
    This will let you achieve a compromise automatically - F (fast refresh) or ? (fast solve) capability - without needing to code or alter your maintenance scripts in response to changing hierarchy nodes.
    a) If only facts have changed, then the cube will be refreshed via FAST REFRESH: only the modified partitions of the cube will be loaded.
    b) If dimensions and facts have changed, then the worst-case scenario is a FAST SOLVE, in which the entire table is reloaded but the cube solve/aggregation is still incremental (involving a recalculation of only the modified/"relevant" part of the hierarchy). Doing this is quite simple: refresh the cube via a call to DBMS_CUBE.BUILD, passing '?' for the method (see the sketch after this list).
    => The presence of materialized view logs etc. will ensure that the relational load into the cube is optimal (if loading a section of the fact table is sufficient, it will do so).
    => The OLAP materialized view metadata will ensure that the OLAP-side post-load aggregation/solve is optimal (if only a section/branch of the hierarchy needs to be recalculated, then it will re-aggregate only the affected portion of the hierarchy/cube).
    c) Try to control your ETLs to ensure that dimension hierarchy changes are a low-frequency occurrence => load facts multiple times a day from the source system (every hour if needed), but allow a resync or reload of the dimension hierarchy tables only once a day if that's OK with the business users.
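    A minimal sketch of that call (the cube name is hypothetical; per the note above, '?' lets the engine pick a fast refresh/fast solve when one is possible and falls back to a complete build otherwise):

    BEGIN
      -- '?' = choose the cheapest valid build method for this cube
      DBMS_CUBE.BUILD('SALES_CUBE', '?');
    END;
    /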
    More details:
    1) Adding Materialized View Capability to a Cube: http://docs.oracle.com/cd/E11882_01/olap.112/e17123/cubes.htm#OLAUG9156
    2) DBMS_CUBE.BUILD procedure: http://docs.oracle.com/cd/E11882_01/appdev.112/e40758/d_cube.htm#ARPLS65742
    HTH
    Shankar

  • Snapshot Refresh (How to stop COMPLETE refresh and run FAST refresh)?

    Hi,
    I have a snapshot refresh executed as COMPLETE which is taking very long. When I try to kill it and run a FAST refresh instead, I get:
    ERROR at line 1:
    ORA-12057: materialized view "PORTALSNP1"."V21_BILLING_ACCOUNT" is INVALID and must complete refresh
    How can I resolve this, stop the COMPLETE refresh altogether, and be able to run the FAST refresh?
    Also, is there a way to estimate how long the running snapshot refresh will take to complete?
    Please and thank you!
    Regards,
    A

    You don't resolve it ... you drop the materialized view. Then you create a materialized view log. Then a properly coded MV.
    http://www.morganslibrary.org/library.html
    Bookmark this link, then look up "Materialized Views" and "Materialized View Logs".
    The log must be created first.
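    In outline, the rebuild looks something like this (a hedged sketch: the MV name is taken from the error message above, while the master table name and query are assumptions):

    DROP MATERIALIZED VIEW portalsnp1.v21_billing_account;
    -- The log on the master table must exist BEFORE the fast-refreshable MV:
    CREATE MATERIALIZED VIEW LOG ON billing_account WITH PRIMARY KEY;
    CREATE MATERIALIZED VIEW portalsnp1.v21_billing_account
    REFRESH FAST ON DEMAND
    AS SELECT * FROM billing_account;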

Maybe you are looking for

  • Error while activating the info object.

    Hi, While activating one of the info objects, i am getting the following error message: Characteristic 0BP_GRP: Cannot switch off InfoProvider property Message no. R7B289 Diagnosis The characteristic receives master data from other objects (for examp

  • RMAN Obsolete

    Hi Guys, I'm having 15 days of backup in RMAN Catalog and my initial RMAN config is as per below: CONFIGURE RETENTION POLICY TO REDUNDANCY 10; Since I only want to keep the backup for latest 10 days, so I have changed the config to: CONFIGURE RETENTI

  • What is the best external drive for Macbook Pro?

    Migration Assistant is not moving files from my Macbook OS X 10.5.8 to My new Macbook Pro just purchased. Apple Support suggested I transfer thru an external drive. They suggested a 1 TB. They said it would be sufficient for migration and afterward f

  • Pass/Fail issue with Quiz and Click Buttons

    Got a complicated one for ya! I have a project that has about 100 quiz questions.  I want to add slides with an image for background, a voice over and a "Click Button" to be used like a Simulation.  If the "Click Button" is not clicked on, I want the

  • Set Customerwise Debtor accounts in Accounts Receivable

    Hi All, I need to set customerwise debtor accounts when a transaction is created in accounts receivables. That is when I enter a transaction for customer A, then account debited must be customer A debtor account. Can I do this in oracle ebusiness sui