Late arriving dimensions

Guys,
I have a situation where I need to handle late arriving dimensions.
This is the process flow I am trying to implement:
1) Load the data into load tables from files (they contain both fact and reference data).
2) Load the data into dimension tables from the load tables.
3) Load the data into fact tables by joining to the dimension and load tables.
If there are any late arriving dimensions, how do I load them?
Which interfaces do I have to change or create, and what do I have to change?
Cheers
VAS

First question is: are you loading the facts without dimension records (your step 3), or are these getting left out?
I assume you mean you have 'late' dimension records arriving after facts. You can either:
Load the facts with 'unknown' dimension keys but keep the business key, in order to update these FKs with the late dimension's key once it is loaded.
or
Create a 'stub', or placeholder record, in the dimension, loaded from the fact data, then come back and update this stub when you have the rest of the dimension attributes.
Option 2 would be preferred, as you don't have to add any business keys to the fact table.
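For illustration, here is a minimal sketch of the stub approach in plain SQL, using hypothetical table and column names (a DIM_CUSTOMER dimension with surrogate key CUSTOMER_KEY, business key CUSTOMER_BK and a STUB_FLAG, plus LOAD_SALES and LOAD_CUSTOMER load tables):
========================================================================
-- Create stubs for business keys present in the staged facts
-- but missing from the dimension
INSERT INTO dim_customer (customer_key, customer_bk, customer_name, stub_flag)
SELECT dim_customer_seq.NEXTVAL, s.customer_bk, 'UNKNOWN', 'Y'
FROM (SELECT DISTINCT l.customer_bk
        FROM load_sales l
       WHERE NOT EXISTS (SELECT 1 FROM dim_customer d
                          WHERE d.customer_bk = l.customer_bk)) s;

-- Later, when the full dimension record arrives, enrich the stub in place
-- (assumes one load row per business key)
UPDATE dim_customer d
   SET (customer_name, stub_flag) =
       (SELECT l.customer_name, 'N'
          FROM load_customer l
         WHERE l.customer_bk = d.customer_bk)
 WHERE d.stub_flag = 'Y'
   AND EXISTS (SELECT 1 FROM load_customer l
                WHERE l.customer_bk = d.customer_bk);
========================================================================
Because the fact rows already point at the stub's surrogate key, no fact-table update is needed when the full attributes arrive.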

Similar Messages

  • Late Arrival with +tive Time Recording

    Hi,
I need a solution for this scenario. I have positive time recording and I want to apply the following rules on a monthly basis (within a month):
1st time lateness beyond 30 minutes ==> 1 hour deduction
2nd time lateness beyond 30 minutes ==> 2 hours deduction
3rd time (and thereafter) lateness beyond 30 minutes ==> 4 hours deduction
Deductions cannot exceed 5 days a month.
After that, set the employee to error, so that action from the time administrator will be required.
I need a solution for this scenario and help writing a PCR.
    Thanks in advance.

  • How to Implement Dimension Operator in OWB

    Hi,
Actually I am new to OWB and I am confused about the use of the dimension operator and the cube operator.
I want to know how data is mapped with the dimension operator as source and as target (in different cases), and also how the cube operator is attached to the dimension operator.
Bottom line: what is the use of the dimension operator and the cube operator in a mapping?
Please help me out with this; it's urgent.
    Thank you

The dimension and cube operators encapsulate a lot of standard ETL for loading dimensions and cubes (semantics on top of the implementing tables). For example, the operators handle hierarchy loading, surrogate key management, slowly changing dimensions, late arriving facts and all sorts. In many tools this kind of behavior has to be manually developed by users.
    You can expand the dimension operator and see under the hood to look at the code that is produced.
    Cheers
    David

  • Does Position Of Cube Dimension Matter While Adding it as Measure

I've created a cube on the Adventure Works database, and when I tried to process the cube I got this error:
Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: 'dbo_DimProduct', Column: 'ProductKey', Value: '1'. The attribute is 'Product Key'. Errors in the OLAP storage engine: The attribute key was converted to an unknown member because the attribute key was not found. Attribute Product Key of Dimension: Dim Product from Database: Analysis Services Project, Cube: Adventure Works DW, Measure Group: Dim Product, Partition: Dim Product, Record: 1. Errors in the OLAP storage engine: The process operation ended because the number of errors encountered during processing reached the defined limit of allowable errors for the operation. Errors in the OLAP storage engine: An error occurred while processing the 'Dim Product' partition of the 'Dim Product' measure group for the 'Adventure Works DW' cube from the Analysis Services Project database.
This happened when the DimProduct dimension was added first and the DimTime dimension later.
I deleted the DimProduct database dimension and added it again as both a measure group and a cube dimension, and it processed successfully.
The only difference I've found is that DimProduct is now listed last in the cube dimensions.
Question: does the position of a cube dimension matter when adding it as a measure?
    And what is causing this error?
    HS

    Hi HS,
Since this issue is related to Analysis Services, I will move this thread to the Analysis Services forum. Some delay might be expected from the thread transfer. Your patience is greatly appreciated.
    Thank you for your understanding and support.
    Regards,
    Katherine Xiong
    TechNet Community Support

  • Random mail arrival failure. SMTP log reports arrival, user never gets it

    Our 10.5.4 Mail server has become progressively less and less reliable over a period of months. I can reproduce repeatable (but inconsistent) instances of mails sent to us and failing to arrive. The weirdness even extends to emails sent from outside to multiple recipients on our server *in the same email*, and where some of the recipients get it and the others don't (while the non-recipients are randomly receiving other mails OK). Or, an external sender sending an email while I'm on the phone with him, it failing to arrive (and never does), and then a resend half an hour later arrives immediately.
    Needless to say, confidence in our mail server is plummeting.
    The OS-X Server mail log *actually records a mail arrival event* even when the user never gets the email, thus:
    Aug 6 13:15:42 dreadnought postfix/smtpd[14462]: connect from sonar.wycliffe.nsw.edu.au[192.168.9.100]
    Aug 6 13:15:42 dreadnought postfix/smtpd[14462]: 39D248C515F: client=sonar.wycliffe.nsw.edu.au[192.168.9.100]
Aug 6 13:15:42 dreadnought postfix/cleanup[14464]: 39D248C515F: message-id=<[email protected]>
    Aug 6 13:15:42 dreadnought postfix/qmgr[100]: 39D248C515F: from=<[email protected]>, size=29520, nrcpt=1 (queue active)
    Aug 6 13:15:42 dreadnought postfix/smtpd[14462]: disconnect from sonar.wycliffe.nsw.edu.au[192.168.9.100]
    Aug 6 13:15:43 dreadnought postfix/smtpd[14471]: connect from localhost[127.0.0.1]
    Aug 6 13:15:43 dreadnought postfix/smtpd[14471]: 0BBFE8C5170: client=localhost[127.0.0.1]
Aug 6 13:15:43 dreadnought postfix/cleanup[14465]: 0BBFE8C5170: message-id=<[email protected]>
    Aug 6 13:15:43 dreadnought postfix/qmgr[100]: 0BBFE8C5170: from=<[email protected]>, size=29969, nrcpt=1 (queue active)
    Aug 6 13:15:43 dreadnought postfix/smtpd[14471]: disconnect from localhost[127.0.0.1]
    Aug 6 13:15:43 dreadnought postfix/smtp[14466]: 39D248C515F: to=<[email protected]>, relay=127.0.0.1[127.0.0.1]:10024, delay=0.85, delays=0.01/0/0.01/0.83, dsn=2.0.0, status=sent (250 2.0.0 Ok: queued as 0BBFE8C5170)
    Aug 6 13:15:43 dreadnought postfix/qmgr[100]: 39D248C515F: removed
    Aug 6 13:15:43 dreadnought postfix/pipe[14473]: 0BBFE8C5170: to=<[email protected]>, relay=cyrus, delay=0.23, delays=0.01/0/0/0.21, dsn=2.0.0, status=sent (delivered via cyrus service)
    Aug 6 13:15:43 dreadnought postfix/qmgr[100]: 0BBFE8C5170: removed
...and the upstream mail filter (sonar in this case, our virus and spam filter) corroborates that an email arrived as well. But, as I said, the user (me) never got this email. Numerous other users report the same experience. Some mails arriving at our mail server simply never get put into inboxes and disappear. Poof.
    The only lead I have is multiple instances of the following message in the console at (roughly) the same time as the mail arrival failure:
6/08/08 1:15:00 PM com.apple.launchd[1] (0x11a220.cron[14553]) Could not setup Mach task special port 9: (os/kern) no access.
    I have taken to rebooting the server twice daily to reduce the frequency of these occurrences. The machine is a G5 XServe hosting 700 accounts in a School.

In the log example quoted above, ignore all the http:// references. Apple's discussions software assumes the square brackets in the log mean "insert URL".

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc. are being used OK. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that it'll still compile and run.
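As a small illustration, a unit test for a mapping could be packaged up using the classic utPLSQL conventions (ut_setup/ut_teardown plus ut_-prefixed test procedures, checked with utAssert); the package and table names below are hypothetical:
========================================================================
-- Hypothetical unit-test package for a mapping that loads CUSTOMER_DIM
CREATE OR REPLACE PACKAGE ut_load_customer_dim AS
  PROCEDURE ut_setup;
  PROCEDURE ut_teardown;
  PROCEDURE ut_row_count;
END ut_load_customer_dim;
/
CREATE OR REPLACE PACKAGE BODY ut_load_customer_dim AS
  PROCEDURE ut_setup IS
  BEGIN
    NULL; -- seed known test rows into the load tables here
  END;
  PROCEDURE ut_teardown IS
  BEGIN
    NULL; -- remove the test rows here
  END;
  PROCEDURE ut_row_count IS
    l_count NUMBER;
  BEGIN
    SELECT COUNT(*) INTO l_count FROM customer_dim;
    -- assert the mapping loaded exactly the rows we seeded
    utAssert.eq('customer_dim row count after mapping run', l_count, 100);
  END;
END ut_load_customer_dim;
/
-- then run it with: EXEC utplsql.test('load_customer_dim');
========================================================================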
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
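As a concrete illustration of how this could be wired into a mapping, here is a minimal sketch of a pre/post-mapping procedure pair that switches event 10046 tracing on and off for the current session. The procedure names are hypothetical, but the ALTER SESSION commands are the standard way to enable level 8 (wait events) tracing:
========================================================================
CREATE OR REPLACE PROCEDURE trace_on (p_label IN VARCHAR2) IS
BEGIN
  -- tag the trace file so it is easy to find in user_dump_dest
  EXECUTE IMMEDIATE
    'ALTER SESSION SET tracefile_identifier = ''' || p_label || '''';
  -- level 8 = include wait events (level 12 would also capture binds)
  EXECUTE IMMEDIATE
    'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 8''';
END trace_on;
/
CREATE OR REPLACE PROCEDURE trace_off IS
BEGIN
  EXECUTE IMMEDIATE
    'ALTER SESSION SET EVENTS ''10046 trace name context off''';
END trace_off;
/
========================================================================
The resulting trace file can then be fed to TKPROF or a response-time profiler as described above.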
Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because what we know will happen is that, after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
    Some initial thoughts on how this could be accomplished
- Put in place some method for automatically / easily generating explain plans for OWB mappings (issue: this is only relevant for mappings that are set-based, and what about pre- and post-mapping triggers?)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
- At an instance level, come up with some stock recommendations for instance settings
- Identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
- Put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
- Incorporate any existing "performance best practices" for OWB development
- Define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
- Other ideas around testing?
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables.
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
- Get the standard set of preferred initialisation parameters for a DW, and use these as the starting point for the stock recommendations. Identify which ones have an effect on an ETL job.
- Identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc.) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
    Further reading / resources:
- "Diagnosing Performance Problems Using Extended Trace" Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
- Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
Although this is something that I'll be progressing with anyway, I'd appreciate any comment from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, does anyone have existing best practices for tuning or testing, has anyone tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a critical path, and then I can visually inspect it for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (Stuff like recovery/restart, late-arriving data, and so on.)
For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a dimensional update. What I am trying to do now is graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it then.
    All suggestions on how to do that grafting gratefully received!
To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Pro-Active Caching Problems After Change that requires Re-Process of Cube?

    We currently have Pro-Active caching setup at one of our clients to provide a near real-time refresh.
    We are using SQL Server 2012 SP1 Cumulative Update 4.
    We are using Views and a Linked Server to pull data from an Oracle Data Warehouse source.
    The Pro-Active caching is set-up using a Polling Query which polls for changes every 2 minutes and the silence intervals are staggered to try and assist with processing.
I had significant problems with this setup, with Pro-Active caching stopping on several occasions. What was a cause for concern is that the stoppages occurred on several occasions without any errors being recorded in the Event Log or Flight Recorder to indicate what went wrong.
This led to me setting things up as follows to get a stable setup that has been running for several days now:
    - All Dimensions Pro-Active Cache Refresh - 2 Minute Polling Query (Staggered Silence Interval)
    - All Measure Groups Pro-Active Cache Refresh - 2 Minute Polling Query (Staggered Silence Interval)
- 2 Measure Groups that had long-running Polling Queries had to be scheduled via SQL Agent, because no matter what, they would continuously stop when scheduled via Pro-Active caching. A SQL Agent job processes the 2 Measure Groups.
However, I still have an issue. When I make changes to the underlying database views that do not necessitate a re-process, Pro-Active caching runs fine and records the change.
If I make a change to the solution that necessitates a re-process of any of the dimensions and cubes, however, then I get "HY008 - Operation Cancelled" messages in the Event Logs for up to 6 hours before they then disappear.
The errors in the Flight Recorder giving the detail behind these messages say "Server: Proactive caching was cancelled because newer data became available" and "The operation was cancelled because of locking conflicts".
I know the first message can appear because of late-arriving facts, but I believe we have dealt with this via the Ignore Errors/Unknown Member option. And why do these messages disappear after 6 hours, after which everything appears stable again?
I'd appreciate anyone's thoughts on this.
    Is Pro-Active caching stable in the version of SQL Server 2012 we are using?

Never mind. I have solved this myself. The system has been a lot more stable since retiring the SQL Agent job that did the manual processing of the measure groups and switching to Pro-Active caching using MOLAP caching only (ensuring no ROLAP was used).

  • HOLAP - using OBIEE and Oracle OLAP

    Hi
How do we implement HOLAP in OBIEE (with Oracle OLAP cubes) as below?
1. For an aggregate table, there are 3 dimensions, each one having 4 levels within its hierarchy.
2. We create an aggregate table at the lowest level of each dimension.
3. We create a cube for the rest of the levels and their combinations for the same measure.
    Can we implement this and how?
    Thanks

    Thanks Nazar for your reply.
On cube-based MV usage - the idea is to have the lowest-level summaries in relational, and the higher levels in a cube-based MV. So in cases where queries can get rewritten, they will retrieve data from the cube itself.
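As a rough sketch of the relational half, the lowest-level aggregate could be an ordinary materialized view with query rewrite enabled; the table and column names here are hypothetical:
========================================================================
-- Aggregate at the lowest level of each dimension; queries against the
-- detail fact that the optimizer can rewrite will be answered from this MV
CREATE MATERIALIZED VIEW sales_lowest_mv
  ENABLE QUERY REWRITE
AS
SELECT f.item_id, f.month_id, f.city_id,
       SUM(f.sales_amount) AS sales_amount
  FROM sales_fact f
 GROUP BY f.item_id, f.month_id, f.city_id;
========================================================================
Higher levels would then be served from the cube, as described above.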
    On hierarchies -
    If we take the example of oracle provided GLOBAL application and check the product dimension -
    ========================================================================
    CREATE TABLE "GLOBAL"."PRODUCT_DIM"
    (     "ITEM_ID" VARCHAR2(12 BYTE) NOT NULL ENABLE,
         "ITEM_DSC" VARCHAR2(31 BYTE) NOT NULL ENABLE,
         "ITEM_DSC_FRENCH" VARCHAR2(60 BYTE),
         "ITEM_DSC_DUTCH" VARCHAR2(60 BYTE),
         "ITEM_PACKAGE" VARCHAR2(20 BYTE),
         "ITEM_MARKETING_MANAGER" VARCHAR2(20 BYTE),
         "ITEM_BUYER" VARCHAR2(20 BYTE),
         "FAMILY_ID" VARCHAR2(4 BYTE) NOT NULL ENABLE,
         "FAMILY_DSC" VARCHAR2(20 BYTE) NOT NULL ENABLE,
         "FAMILY_DSC_DUTCH" VARCHAR2(60 BYTE),
         "CLASS_ID" VARCHAR2(4 BYTE) NOT NULL ENABLE,
         "CLASS_DSC" VARCHAR2(15 BYTE) NOT NULL ENABLE,
         "CLASS_DSC_FRENCH" VARCHAR2(60 BYTE),
         "CLASS_DSC_DUTCH" VARCHAR2(60 BYTE),
         "TOTAL_ID" VARCHAR2(15 BYTE) NOT NULL ENABLE,
         "TOTAL_DSC" VARCHAR2(15 BYTE) NOT NULL ENABLE,
         "TOTAL_DSC_FRENCH" VARCHAR2(30 BYTE),
         "TOTAL_DSC_DUTCH" VARCHAR2(30 BYTE),
         CONSTRAINT "PRODUCT_DIM_PK" PRIMARY KEY ("ITEM_ID")
    USING INDEX PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "USERS" ENABLE
    ) SEGMENT CREATION IMMEDIATE
    PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
    STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
    PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "USERS" ;
    ========================================================================
Now, if this were a Type 2 dimension, and you had the possibility of late arriving sales data where you need to pick up "point in time" attributes of the product, how can I model the dimension hierarchy in that case in OLAP?
In the Oracle example, they have kept it a simple Type 1, so ITEM_ID can be kept as the key. However, if I had PRODUCT_DIM as a Type 2 dimension table and I was using a sequence-generated surrogate key as the PK of the dimension table, how do I model it in OLAP?
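On the relational side, the "point in time" lookup meant here joins the fact's business key and sale date against the dimension's effective-date range; a minimal sketch, assuming hypothetical DIM_KEY, EFFECTIVE_FROM and EFFECTIVE_TO columns on PRODUCT_DIM and a staged fact table:
========================================================================
SELECT f.sales_id, d.dim_key
  FROM sales_fact_stage f
  JOIN product_dim d
    ON d.item_id = f.item_id
   AND f.sale_date BETWEEN d.effective_from AND d.effective_to;
========================================================================
The open question is how to express the same "as of" semantics in the OLAP dimension.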
    Thanks

  • About "the hospital for the error records"...

    Hi folks,
I am new to ODI, and I am not buying the idea of "the hospital for the error records". In my personal opinion it introduces many complications. If some dimension records are "dirty" and we put them in the error table, then we are splitting the natural relation of the data, aren't we? The DWH data won't correspond to the source anymore, the dimension's data will be artificially split (the error records could appear on the next day), and we will need an additional process to update the corresponding fact records on the next day (late arriving changes for the existing facts). That sounds much more complicated than just allowing the ETL process to fail, fixing the issue and starting it again.
    Experts, please, help with an opinion.
    Thanks in advance!

    @DecaXD, thanks for your comment.
    "after that under my opinion "something is better than none"" This is exactly what I am worried about. The whole "hospital" idea, is something, which makes the whole process more complicated, without clear benefits. I like the idea to have E$ tables, but just for debugging purposes, just to be able to identify quickly, what were the error rows, so I can do a investigation based on failed data. What, I don't like is the idea that we can EASILY reuse the error data to recover the DWH state later.
    "Are you sure that you want to stop ALL your datawarehouse because the Address field is null?"Well, if during the analysis phase (especially, the data profiling step), we missed the fact that the address table could be null, then yes, I would like to stop the whole ETL, because this is situation, which has not been addressed correctly during the design of the ETL process.
    A rithorical question: Are you going to keep recovering the null records after each run (even a "successfull" one), or you would fix the interface/mapping to allow nulls, which would be updated on subsequent runs naturally ?
    What about more complicated error rows, like missing dimension key, which relates to many fact records ? Should we put the related fact records in their E$ table either. What if we have a snowflake design with many references to/from the error dimension records. Sounds too complicated for me.
    I would just store the error rows, and stop the ETL. Note, that I allow for dellay, but don't publish any not consistent data in the DWH.
    Bellow is a quote from the book you mentioned:
    "Don’t overuse the reject file! Reject files are notorious for being dumping
    grounds for data we’ll deal with later. When records wind up in the reject file,
    unless they are processed before the next major load step is allowed to run to
    completion, the data warehouse and the production system are out of sync."
I am looking for more arguments. I do understand that it is not a simple issue, but I would like to see your real-life experience here.
    Thanks!

  • Is there ANY possible way to back up my Macbook (2009, running on OSX 10.5.8) using Time Capsule using a cable/not using the internet? I'm willing to come in-store to do this if necessary!

    Time Capsule/Time Machine were working fine until about 1-2 years ago when my family started having troubles with the internet provider (AT&T). While on a 2 hr long phone conversation with them trying to fix it, they changed the wireless key code or whatever so Time Capsule could no longer connect to the network (icing on the cake - they didn't even fix the issue). And it had been so long since I had initially set up Time Capsule that I didn't know how to set it up again.
Situation is a bit more complicated now because I'm living on campus in another state, and the campus wifi is very slow/goes in and out a lot. They don't recommend connecting routers to their network because they say it will make it even worse.
    It just makes me really nervous that I haven't backed up my laptop in a couple years. I have thousands of priceless photos and years of hard work that I would very much like to keep safe. I paid all this money to have an external drive and am dying to continue using it! It just doesn't make sense that I can only use it wirelessly - there must be another way. I'm desperate to back up my laptop and will come to Genius bar if necessary to resolve this issue!
    PLEASE HELP!!!

This can be done easily with ethernet.
Please follow the instructions strictly.
To make it easier, I want you to do this overnight so you can turn off all your current connections to the internet.
Just go to the AirPort fan icon at the top right and turn AirPort off.
Get an ethernet cable and connect the laptop to the TC LAN port, i.e. the <-> ones.
Press and hold the reset on the TC for about 10 sec, until the front LED flashes rapidly.
Open the AirPort Utility, go to manual setup, and change the wireless to off (so other people around you cannot join your network of one).
Ignore all the errors; they won't stop the backup working.
Go to Time Machine and reselect the backup target disk as the TC.
It should start after 2 min and run through to completion.
That is it. For a backup of many GB it might take a few hours, so make sure the laptop has power plugged in and sleep is off (on early ones I think this is needed, but I am a late arrival to the scene; sleep doesn't affect later OS versions).

  • Some songs on iTunes have swears in them but are not marked with an "EXPLICIT" sign. Is there a possible way for you to fix this? And is there some way for me to send a list of songs that I have come across so that customers know what they're buying?

    Here is a list of songs on iTunes that I have bought that need to be marked with an explicit sign:
    Maroon 5:
    "Harder to Breathe"-Songs About Jane
    Goo Goo Dolls:
    "Don't Beat My *** (With a Baseball Bat)"-Goo Goo Dolls
    "Up Yours"-Jed
    "James Dean"-Jed
    "There You Are"-Hold Me Up
    "Out of the Red"-Hold Me Up
    "Lucky Star"-Superstar Car Wash
    "Close Your Eyes"-Superstar Car Wash
    "Only One"-A Boy Named Goo
    "Cuz You're Gone/1000 Words"-Live in Buffalo, July 4th
    "Slide"-Live in Buffalo, July 4th
    Avril Lavigne:
    "Push"-Goodbye Lullaby
    "Wish You Were Here"-Goodbye Lullaby
    "Smile"-Goodbye Lullaby
    "My Happy Ending"-Under My Skin
    Aerosmith:
    "Mama Kin"-Aerosmith
    The Cult:
    "Lucifer"-Choice of Weapon
    Green Day:
    "Burnout"-Dookie
    "Having a Blast"-Dookie
    "F.O.D."-Dookie
    "Brain Stew"-Insomniac
    "Jaded"-Insomniac
    Linkin Park:
    "Wretches & Kings"-A Thousand Suns
    P!nk:
    "Can't Take Me Home"-Can't Take Me Home
    Here's one song that SHOULDN'T have a sign on it:
    "Before the Lobotomy"-Green Day

  • ORA-13193: failed to allocate space for geometry

    I am having problems indexing a point layer using Locator in 8.1.6. The errors are:
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-13200: internal error [ROWID:AAAFs9AADAAAB30AAg] in spatial indexing.
    ORA-13206: internal error [] while creating the spatial index
    ORA-13193: failed to allocate space for geometry
    ORA-06512: at "MDSYS.SDO_INDEX_METHOD", line 7
    ORA-06512: at line 1
    ORA-06512: at "MDSYS.GEOCODER_HTTP", line 168
    ORA-06512: at line 1
    The table has 2 columns:
    POSTCODE VARCHAR2(9)
    LOCATION MDSYS.SDO_GEOMETRY
I loaded the point data with SQL*Loader rather than geocoding, so this could be the root of my problem; maybe I've missed something obvious:
    POSTCODE
    LOCATION(SDO_GTYPE, SDO_SRID, SDO_POINT(X, Y, Z), SDO_ELEM_INFO, SDO_ORDINATES)
    AB15 8SA
    SDO_GEOMETRY(1, NULL, SDO_POINT_TYPE(3858, 8075, NULL), NULL, NULL)
    AB15 8SB
    SDO_GEOMETRY(1, NULL, SDO_POINT_TYPE(3864, 8077, NULL), NULL, NULL)
    AB15 8SD
    SDO_GEOMETRY(1, NULL, SDO_POINT_TYPE(3867, 8083, NULL), NULL, NULL)
    I am trying to index using the following command, this is when I got the error:
    SQL> execute geocoder_http.setup_locator_index('POSTCODE_POINTS', 'LOCATION');
USER_SDO_GEOM_METADATA has some metadata inserted with lat/long dimensions. My data is not in lat/long, so I updated this, but I still get the same error message.
Is it possible to use Locator with data that's not in lat/long format?
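(For reference, metadata for a non-lat/long layer is inserted along these lines; the bounds and tolerance here are made up for illustration:
========================================================================
INSERT INTO user_sdo_geom_metadata (table_name, column_name, diminfo, srid)
VALUES ('POSTCODE_POINTS', 'LOCATION',
        MDSYS.SDO_DIM_ARRAY(
          MDSYS.SDO_DIM_ELEMENT('X', 0, 700000, 0.05),   -- hypothetical X range
          MDSYS.SDO_DIM_ELEMENT('Y', 0, 1300000, 0.05)), -- hypothetical Y range
        NULL);
========================================================================
)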
    Anyone got any ideas?
    Thanks.

1) If Locator needs lat/long coordinates, can anyone suggest a good route to convert coordinates, for example from British National Grid to lat/long?
2) Are there plans for Locator to be able to use data from other coordinate systems, as Spatial can?

  • IP SLA Statistics Reliable?

    Hi,
I have 2 routers configured with IP SLA udp-jitter monitoring, each acting as both source and destination. They are connected over a private fiber link, but the statistics have never been similar: one constantly shows a large number of packet losses, while the other shows little. Can one IP SLA operation even send that large a number of packets? Can someone explain this, please?
    Thanks in advance.
    3845 is configured as:
    ip sla responder
    ip sla 1
    udp-jitter 192.168.176.50 17000 codec g729a
    tos 184
    ip sla schedule 1 life forever start-time now
    1841 is configured as:
    ip sla responder
    ip sla 1
    udp-jitter 192.168.176.51 17000 codec g729a
    tos 184
    ip sla schedule 1 life forever start-time now
    ==========================================
    on 3845:
    sh ip sla stati 1
    Round Trip Time (RTT) for       Index 1
            Latest RTT: 4 milliseconds
    Latest operation start time: 13:33:54.880 NZDT Tue Sep 28 2010
    Latest operation return code: OK
    RTT Values:
            Number Of RTT: 999              RTT Min/Avg/Max: 3/4/37 milliseconds
    Latency one-way time:
            Number of Latency one-way Samples: 999
            Source to Destination Latency one way Min/Avg/Max: 3/3/36 milliseconds
            Destination to Source Latency one way Min/Avg/Max: 1/0/2 milliseconds
    Jitter Time:
            Number of Jitter Samples: 997
            Source to Destination Jitter Min/Avg/Max: 1/1/32 milliseconds
            Destination to Source Jitter Min/Avg/Max: 1/1/2 milliseconds
    Packet Loss Values:
            Loss Source to Destination: 1           Loss Destination to Source: 0
            Out Of Sequence: 0      Tail Drop: 0    Packet Late Arrival: 0
    Voice Score Values:
            Calculated Planning Impairment Factor (ICPIF): 11
    MOS score: 4.06
    Number of successes: 59
    Number of failures: 0
    Operation time to live: Forever
    On 1841:
    sh ip sla stati 1
    IPSLAs Latest Operation Statistics
    IPSLA operation id: 1
            Latest RTT: 4 milliseconds
    Latest operation start time: 13:34:16.498 NZDT Tue Sep 28 2010
    Latest operation return code: Over threshold
    RTT Values:
            Number Of RTT: 999              RTT Min/Avg/Max: 3/4/7 milliseconds
    Latency one-way time:
            Number of Latency one-way Samples: 510
            Source to Destination Latency one way Min/Avg/Max: 1/1/2 milliseconds
            Destination to Source Latency one way Min/Avg/Max: 3/3/6 milliseconds
    Jitter Time:
            Number of SD Jitter Samples: 509
            Number of DS Jitter Samples: 508
            Source to Destination Jitter Min/Avg/Max: 0/122189/62193664 milliseconds
            Destination to Source Jitter Min/Avg/Max: 0/1/3 milliseconds
    Packet Loss Values:
            Loss Source to Destination: 4294966376          Loss Destination to Source: 716
            Out Of Sequence: 233    Tail Drop: 4294966836
            Packet Late Arrival: 0  Packet Skipped: 0
    Voice Score Values:
            Calculated Planning Impairment Factor (ICPIF): 11
    MOS score: 4.06
    Number of successes: 55
    Number of failures: 0
    Operation time to live: Forever

Looks like 2 different IOS versions judging by the output differences, but the output should be accurate.
Here is the white paper for udp jitter:
http://www.cisco.com/en/US/prod/collateral/iosswrel/ps6537/ps6555/ps6602/prod_white_paper0900aecd804fb392.pdf
Looks like the 1841 is reporting a lot more drops, to be sure. In fact, loss and tail-drop values like 4294966376 sit just below 2^32, which suggests an unsigned 32-bit counter that has wrapped below zero rather than genuine loss.
However, the MOS score and impairment factors are the same for both directions.
I would try clearing the counters and see if the drops reappear; if not, possibly some transient issue led to this and the results are cumulative.
The command reference for this output is here:
http://www.cisco.com/en/US/partner/docs/ios/ipsla/command/reference/sla_04.html#wp1074642
    Hope this helps.

  • IP SLA icmp jitter operation

    Hello
I would like to track icmp-jitter for an end host. I verified in the documentation that the destination can be any host, but I got an error on this operation:
     Latest RTT: NoConnection/Busy/Timeout
I verified that there is no firewall between the source and destination, and an ICMP timestamp request works when done manually:
    r01#ping          
    Protocol [ip]:     
    Target IP address: 10.23.33.6
    Repeat count [5]: 
    Datagram size [100]: 
    Timeout in seconds [2]: 
    Extended commands [n]: y
    Source address or interface: 
    Type of service [0]: 
    Set DF bit in IP header? [no]: 
    Validate reply data? [no]: 
    Data pattern [0xABCD]: 
    Loose, Strict, Record, Timestamp, Verbose[none]: Timestamp
    Number of timestamps [ 9 ]: 
    Loose, Strict, Record, Timestamp, Verbose[TV]: 
    Sweep range of sizes [n]: 
    Type escape sequence to abort.
    Sending 5, 100-byte ICMP Echos to 10.23.33.6, timeout is 2 seconds:
    Packet has IP options:  Total option bytes= 40, padded length=40
     Timestamp: Type 0.  Overflows: 0 length 40, ptr 5
      >>Current pointer<<
      Time= 01:00:00.000 CET (00000000)
      Time= 01:00:00.000 CET (00000000)
      Time= 01:00:00.000 CET (00000000)
      Time= 01:00:00.000 CET (00000000)
      Time= 01:00:00.000 CET (00000000)
      Time= 01:00:00.000 CET (00000000)
      Time= 01:00:00.000 CET (00000000)
      Time= 01:00:00.000 CET (00000000)
      Time= 01:00:00.000 CET (00000000)
    Reply to request 0 (4 ms).  Received packet has no options
    Reply to request 1 (4 ms).  Received packet has no options
    Reply to request 2 (1 ms).  Received packet has no options
    Reply to request 3 (1 ms).  Received packet has no options
    Reply to request 4 (1 ms).  Received packet has no options
    Success rate is 100 percent (5/5), round-trip min/avg/max = 1/2/4 ms
    r01#sh ip sla statistics 196
    IPSLAs Latest Operation Statistics
    IPSLA operation id: 196
    Type of operation: icmp-jitter
            Latest RTT: NoConnection/Busy/Timeout
    Latest operation start time: 12:45:21.019 CET Fri Nov 21 2014
    Latest operation return code: Timeout
    RTT Values:
            Number Of RTT: 0                RTT Min/Avg/Max: 0/0/0 
    Latency one-way time:
            Number of Latency one-way Samples: 0
            Source to Destination Latency one way Min/Avg/Max: 0/0/0 
            Destination to Source Latency one way Min/Avg/Max: 0/0/0 
    Jitter Time:
            Number of SD Jitter Samples: 0
            Number of DS Jitter Samples: 0
            Source to Destination Jitter Min/Avg/Max: 0/0/0 
            Destination to Source Jitter Min/Avg/Max: 0/0/0 
    Packet Late Arrival: 0
    Out Of Sequence: 0
            Source to Destination: 0        Destination to Source 0
            In both Directions: 0
    Packet Skipped: 0       Packet Unprocessed: 0
    Packet Loss: 0
            Loss Period Length Min/Max: 0/0
    Number of successes: 0
    Number of failures: 34
    ip sla 197
     icmp-jitter 10.23.33.6
     frequency 30
    ip sla schedule 197 life forever start-time now
    Nov 21 12:57:43: IP SLAs(197) Scheduler: saaSchedulerEventWakeup
    Nov 21 12:57:43: IP SLAs(197) Scheduler: Starting an operation
    Nov 21 12:57:43: IP SLAs(197) icmpjitter operation: Starting icmpjitter operation
    Nov 21 12:57:49: IP SLAs(197) icmpjitter operation: Timeout
    Nov 21 12:57:49: IP SLAs(197) icmpjitter operation: Timeout
    Nov 21 12:57:49: IP SLAs(197) Scheduler: Updating result
    Nov 21 12:57:49: IP SLAs(197) Scheduler: start wakeup timer, delay = 24796
    Nov 21 12:57:49: IP SLAs(197) icmpjitter operation: Timeout
    Nov 21 12:57:49: IP SLAs(197) icmpjitter operation: Timeout
    Nov 21 12:57:49: IP SLAs(197) icmpjitter operation: Timeout
    Nov 21 12:57:49: IP SLAs(197) icmpjitter operation: Timeout
    Nov 21 12:57:49: IP SLAs(197) icmpjitter operation: Timeout
    Nov 21 12:57:49: IP SLAs(197) icmpjitter operation: Timeout
    Nov 21 12:57:49: IP SLAs(197) icmpjitter operation: Timeout
    Nov 21 12:57:49: IP SLAs(197) icmpjitter operation: Timeout
    Nov 21 12:57:49: IP SLAs(197) icmpjitter operation: Timeout
    Any help would be appreciated.

    Hi Jorge
According to the Cisco documentation, icmp-jitter should work on any IP device.
I have a similar issue:
1. I can run icmp-jitter successfully to non-Cisco routers.
2. It fails to run to a generic IP device.
    Imran

  • IP-sla udp-jitter / one-way delay no output

    Hi *,
I have a question regarding "ip sla udp-jitter".
On some connections I get output from "show ip sla stat" for the one-way delay; on other links I don't get any output. The configuration is always the same and the probes are running.
NTP is configured, but in my opinion whether I get output for the one-way delay or not depends on the NTP root dispersion.
Is there a maximum allowed time difference between the two routers?
Here is one working and one non-working output from the same router, with different peers:
Not working:
    Latest operation return code: OK
    RTT Values:
Number Of RTT: 100              RTT Min/Avg/Max: 11/11/13 milliseconds
    Latency one-way time:
    Number of Latency one-way Samples: 0
    Source to Destination Latency one way Min/Avg/Max: 0/0/0 milliseconds
    Destination to Source Latency one way Min/Avg/Max: 0/0/0 milliseconds
    Working:
    Latest operation return code: OK
    RTT Values:
Number Of RTT: 100              RTT Min/Avg/Max: 12/13/14 milliseconds
    Latency one-way time:
    Number of Latency one-way Samples: 100
    Source to Destination Latency one way Min/Avg/Max: 6/7/8 milliseconds
    Destination to Source Latency one way Min/Avg/Max: 5/6/7 milliseconds
    I hope one of you can help me to find / fix the problem,
    Thanks in advance / Emanuel

    Hi everyone,
I have the same doubt.
I did an IP SLA configuration on an 1841 and a 7206VXR, and nothing shows in the one-way delay.
    ----------------------7206---------------------
    -ip sla monitor responder
    -ip sla monitor 1
    - type jitter dest-ipaddr 10.9.105.14 dest-port 16384 source-ipaddr 10.8.20.102  codec g711alaw
    - tos 184
    -ip sla monitor schedule 1 start-time now
    -ntp peer 10.9.105.14
    HOST)#show ip sla sta
    Round Trip Time (RTT) for       Index 1
            Latest RTT: 507 milliseconds
    Latest operation start time: 10:57:36.619 UTC Sun Oct 10 2010
    Latest operation return code: OK
    RTT Values:
            Number Of RTT: 1000             RTT Min/Avg/Max: 125/507/846 milliseconds
    Latency one-way time:
            Number of Latency one-way Samples: 0
            Source to Destination Latency one way Min/Avg/Max: 0/0/0 milliseconds
            Destination to Source Latency one way Min/Avg/Max: 0/0/0 milliseconds
    Jitter Time:
            Number of Jitter Samples: 999
            Source to Destination Jitter Min/Avg/Max: 1/1/6 milliseconds
            Destination to Source Jitter Min/Avg/Max: 1/5/23 milliseconds
    Packet Loss Values:
            Loss Source to Destination: 0           Loss Destination to Source: 0
            Out Of Sequence: 0      Tail Drop: 0    Packet Late Arrival: 0
    Voice Score Values:
            Calculated Planning Impairment Factor (ICPIF): 17
            Mean Opinion Score (MOS): 3.84
    Number of successes: 38
    Number of failures: 0
    Operation time to live: 1347 sec
    -------------------------1841-------------------------------
    -ip sla monitor responder
    -ip sla monitor 1
    - type jitter dest-ipaddr 10.8.20.102 dest-port 16384 source-ipaddr 10.9.105.14 codec g711alaw
    - tos 184
    -ip sla monitor schedule 1 start-time now
    -ntp peer 10.8.20.102
    3383)#show ip sla monitor statistic
    Round trip time (RTT)   Index 1
            Latest RTT: 614 ms
    Latest operation start time: 10:50:50.491 UTC Wed Oct 27 2010
    Latest operation return code: OK
    RTT Values
            Number Of RTT: 999
            RTT Min/Avg/Max: 347/614/867 ms
    Latency one-way time milliseconds
            Number of one-way Samples: 0
            Source to Destination one way Min/Avg/Max: 0/0/0 ms
            Destination to Source one way Min/Avg/Max: 0/0/0 ms
    Jitter time milliseconds
            Number of SD Jitter Samples: 997
            Number of DS Jitter Samples: 998
            Source to Destination Jitter Min/Avg/Max: 0/6/19 ms
            Destination to Source Jitter Min/Avg/Max: 0/1/3 ms
    Packet Loss Values
            Loss Source to Destination: 1           Loss Destination to Source: 0
            Out Of Sequence: 0      Tail Drop: 0    Packet Late Arrival: 0
    Voice Score Values
            Calculated Planning Impairment Factor (ICPIF): 20
    MOS score: 3.72
    Number of successes: 32
    Number of failures: 0
    Operation time to live: 1668 sec
