Trickle Feed

Hi All Gurus,
Can anyone please explain how trickle feeds work in ASO?
As I understand it, data is loaded into buffers and committed to the cube as slices while the existing data remains available to users; the slices can later be merged into the main database slice. My question is: can users still access the data while the slices are being merged into the cube? I tried looking for the answer in the DBAG but was unable to find it.
Thanks

There have been a few posts on this subject in the past; these should give you more information:
"trickle-feed" ASO Essbase
ASO trickle feed option
Re: is the essbase suit for near real time data analysis?
The user does not see the slice and will retrieve as normal.
Cheers
John
http://john-goodwin.blogspot.com/
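
For reference, here is a minimal MaxL sketch of the slice workflow described above (the AsoSamp.Sample database and the file path are hypothetical): data is staged in a load buffer, committed as an incremental slice while users keep retrieving against the existing data, and the slices are later folded back into the main slice.

/* stage the incremental data in a load buffer */
alter database AsoSamp.Sample initialize load_buffer with buffer_id 1;
import database AsoSamp.Sample data from data_file '/tmp/incremental.txt'
to load_buffer with buffer_id 1 on error abort;
/* commit the buffer as a new data slice; retrievals continue throughout */
import database AsoSamp.Sample data from load_buffer with buffer_id 1 create slice;
/* later, merge all incremental slices into the main database slice */
alter database AsoSamp.Sample merge all data;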

Similar Messages

  • ASO Trickle feed

    Good morning
I see in the Oracle literature concerning ASO cubes that it refers to sending slices of data to the cube as "trickle feed". Is this because the send function for ASO cubes is not nearly as robust as block storage's? If not, what is trickle feed and how is it different from sending data?

    Trickle feed is (I think) the old marketing name for ASO slices. Check out the DBAG and Tech Ref for more about it. Also, Angie Wilcox presented on slices at ODTUG's Kscope12 -- you can download the presentation for free from odtug.com's Technical Resources.
    I believe at one time ASO databases could not have incremental data -- additional data forced clears and full loads. Slices are just a way to load additional data to an already populated ASO cube -- it's an incremental data load.
    There's nothing less robust about ASO sends (which, btw, are itsy-bitsy slices), but they are different from BSO lock-and-sends. Again, the docs will explain.
    Regards,
    Cameron Lackpour

  • ASO trickle feed option

    Hi All
    Can someone please explain in detail what Essbase's trickle feed option is and how one can use it?
    I'm looking for ways to build near real-time cubes and heard that the 'trickle feed' option introduced in Essbase 9.3.1 could potentially help, but unfortunately the manuals say nothing about it.
    Thanks,
    Dmitry

    I have attempted this in the past. I'm not sure about the term "trickle feed".
    But the technique that I had the most success with was:
    import database sample.basic data
    from data_string
    '"Sales" "COGS" "Marketing" "Payroll" "Misc" "Opening Inventory" "Additions"
    "Ending Inventory" "100-10" "New York" "Jan" "Actual"
    678 271 94 51 0 2101 644 2067'
    on error abort;
    Your automation process can build this string in real time. You might want to set up committed access on the BSO cube, just in case you need rollback capabilities; see the sketch below.
    Brian Chow
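
    Brian mentions committed access for rollback; as a hedged sketch of that setting (Sample.Basic as in his example), the relevant MaxL would be along these lines:

    /* committed access lets a failed real-time load roll back as a single transaction */
    alter database sample.basic enable committed_mode;
    /* pre-image access lets readers see the pre-update image while the load is in flight */
    alter database sample.basic enable pre_image_access;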

  • "trickle-feed" ASO Essbase

    Hi all,
    What is "trickle-feed" ASO Essbase...
    Thanks

    From the Essbase 9.3.1 readme:
    The aggregate storage database model has been enhanced with the following features:
    - An aggregate storage database can contain multiple slices of data.
    - Incremental data loads complete in a length of time that is proportional to the size of the incremental data.
    - You can merge all incremental data slices into the main database slice, or merge all incremental data slices into a single data slice while leaving the main database slice unchanged.
    - Multiple data load buffers can exist on a single aggregate storage database. To save time, you can load data into multiple data load buffers at the same time.
    - You can atomically replace the contents of a database or the contents of all incremental data slices.
    - You can control the share of resources that a load buffer is allowed to use, and set properties that determine how missing, zero, and duplicate values in the data sources are processed.
    See the Hyperion Essbase - System 9 Database Administrator's Guide, the Essbase Technical Reference, and the Essbase Administration Services Online Help.
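
    To make the buffer and merge features above concrete, here is a short MaxL sketch (AsoSamp.Sample and the 0.5 resource figures are hypothetical): two sessions load separate buffers in parallel, and the resulting incremental slices are merged without touching the main slice.

    /* each parallel session creates its own buffer, capped at 50% of load resources */
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 1 resource_usage 0.5;
    alter database AsoSamp.Sample initialize load_buffer with buffer_id 2 resource_usage 0.5;
    /* merge all incremental slices into one slice, leaving the main database slice unchanged */
    alter database AsoSamp.Sample merge incremental data;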

  • Trickle-feed Mapping

    Hi gurus,
    Do you have any experience with developing a trickle-feed mapping in OWB 11gR2?
    It'd be appreciated if you could kindly share it!
    Iman


  • OWB Trickle Feed Mode

    Just attempting to replicate some dequeue functionality currently encoded in PL/SQL, using Oracle object types as the transport mechanism for the AQ.
    Following David Allan's blog reveals a few issues:
    http://blogs.oracle.com/warehousebuilder/2009/09/owb_11gr2_trickle_feed_data_acquisition_and_delivery.html
    The first is that the single-subscriber queues are not being imported by OWB, but the multi-subscriber queue is; the second is that the mapping will not compile using the multi-subscriber queue and an error is returned:
    VLD-4257: The real-time driver queue RECON_Q is not a streams queue
    Select a streams queue as the real-time driver queue
    Any ideas? I know this is 11.2 frontier stuff.

    Hi Paul
    These are some odds and ends I have found behind the reasoning. There are several limitations when using a single-consumer queue. Some of them are:
    * You cannot add subscriptions (or subscribers) to single-consumer queues.
    * To be able to add recipients and subscribers, the queue must reside in a queue table that is created with the multiple-consumer option. Remember that the recipient list overrides the subscriber list and allows producers to control message dissemination.
    * You can propagate messages from a multi-consumer queue to a single-consumer queue. The reverse is not possible (as far as I know, at least up to 11gR1).
    * Consumers of a message in multi-consumer queues can be local or remote.
    * Multicast and broadcast models are not feasible with a single-consumer queue.
    * In 10g and later (I think), the queue monitor removes messages from multi-consumer queues. This allows dequeuers to complete the dequeue operation without locking the message in the queue table, which got rid of the only sore point of multi-consumer queues.
    Single-consumer queues are cookie-cutter solutions between two applications that exclusively talk to each other and have no use beyond that. Multi-consumer queues are more versatile.
    Cheers
    David

  • Loading data using send function in Excel to aggregate storage cube

    Hi there
    Just got version 9.3.1 installed. I can finally load to an aggregate storage database using the Excel Essbase send; however, it is very slow, especially when loading many lines of data. Block storage is much, much faster. Is there any way to speed up loading to an aggregate storage database? Or is this an architectural issue, meaning not much can be done?

    As far as I know, it is an architectural issue. Further, I would expect it to slow down even more if you have numerous people writing back simultaneously because, as I understand it, the update process is throttled on the server side so that only a single user is actually 'writing' at a time. At least this is better than earlier versions, where other users couldn't even do a read while the database was being loaded; I believe that restriction has been lifted as part of the 'trickle-feed' support (although I haven't tested it).
    Tim Tow
    Applied OLAP, Inc
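
    A housekeeping note to go with this (a sketch, assuming a database named AsoSamp.Sample): each committed send accumulates as a tiny slice, and retrieval performance degrades as slices pile up, so it is common to fold them back into the main slice during a quiet window.

    /* list any load buffers still open against the database */
    query database AsoSamp.Sample list load_buffers;
    /* fold the accumulated send slices back into the main database slice */
    alter database AsoSamp.Sample merge all data;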

  • Best way to keep Essbase data available at all time

    Hi all,
    We are trying to find the most suitable way of designing our Essbase cubes so that the data is available to users at all times, even during data replication and aggregation.
    We may have to aggregate the cubes every hour, and the cubes will be both ASO and BSO.
    Can someone give some direction on this? Can we use trickle feed for ASO, or is clustering a better option? Also, what kind of clustering would be best suited for this?
    Any other tips will be really helpful.
    Many thanks.

    You do not need to aggregate the ASO cube, because all aggregations are calculated automatically at user retrieval time. But you may want to build materialized aggregate views to store the most popular reports.
    Please provide more details:
    1) What is your use case - budgeting, or only reporting?
    2) How many interactive concurrent users?
    3) How many members in measures, how many dimensions, how many members and levels per dimension?
    What reporting tool are you using?
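
    For context, the materialized aggregate views mentioned above are ASO aggregate views, which can be built in MaxL. A minimal sketch, assuming a database named AsoSamp.Sample (the 1.5 growth cap is illustrative):

    /* optionally record which intersections users actually query, to inform view selection */
    alter database AsoSamp.Sample enable query_tracking;
    /* build aggregate views, capping total cube growth at 1.5x the input-level size */
    execute aggregate process on database AsoSamp.Sample stopping when total_size exceeds 1.5;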

  • SAP BW 3.52 and POS DM: Architecture Best Practise

    We are a retail/wholesale company with two different business units. Currently we are running SAP BW 3.0 integrated with SAP R/3 and APO for reporting purposes. We are now in the early stages of implementing SAP Retail along with SAP POS DM to support the retail business. We have a question regarding the architecture and best practices. We were told that POS DM is an integral part of SAP BW 3.52, i.e. a function within the BW application. We would like to run the retail instance of SAP BW on a separate box and separate POS DM onto another box, the reason being that POS DM processes all the store POS transactions through to either IS-Retail or SAP BW. We believe the transaction processing that occurs in POS DM should be separated from SAP BW in order to maintain efficiency. Can anyone share their perspective and experience on what the best practice would be? Also, is it possible to do what we would like to do, i.e. run POS DM and BW on different boxes?
    Please advise.
    Thanks
    Satish Seshayya

    Information about POS DM should be coming soon as an update to the Solution Manager.
    Some remarks:
    The best way to get more information is to contact SAP Sales. Most of my customers run POS DM on the same box as BW.
    A typical process looks like this:
    The POS system is mapped via a converter (e.g. XI) to the BAPI of the PIPE (/posdw/bapi_postr_create). You could also feed the PIPE via IDocs or via direct input (proxy).
    You have internal tasks running for validation of the data (sequencing, duplicates, ...), and you can schedule other tasks to supply R/3 with IDocs and to write data to the delta queues from which BW can take the data.
    You can schedule everything time-based. That means as long as you don't have a huge amount of data (e.g. >12,000,000 line items/day) you should have no big problems with the right hardware.
    You are also able to do "trickle feed", which is more or less immediate processing of the data (e.g. directly, or every 2 hours).
    Wish you much luck

  • How to replicate data between Oracle db and SQL Server dbs in real time?

    Hello,
    Does anyone have an idea what tool we can use for real-time data replication between Oracle and SQL Server, Oracle and Sybase, or SQL Server and Sybase?
    This topic was brought up by a project manager.
    I only know Oracle-to-Oracle replication via Streams or GoldenGate.
    Thanks
    Jerry

    Since GoldenGate's bread and butter was (and is) replicating data between heterogeneous data sources, and since Oracle has purchased GoldenGate, that would seem like a natural place to start.
    Beyond that, it depends on the architecture you want and how you define "real time." Just about any ETL tool on the market, whether Oracle's ODI or OWB or any number of third-party products (Informatica, DataStage, etc.), can handle "trickle feeds" from various data sources to a database target of your choosing. Different tools will have different sorts of integration with the source database; many will require that a bunch of triggers be created to track changes on the source systems.
    If you want Oracle to control the replication process (which doesn't really make sense if we're talking about replication from a non-Oracle database to another non-Oracle database), you can use the Oracle Transparent Gateway products to create database links from Oracle to the non-Oracle databases and query data on the source database periodically.
    Justin

  • CDC in OWB 11.2

    Hi guys,
    Two questions regarding setting up Change Data Capture (CDC):
    1. In OWB 11.2, is using CT Mapping the only way to perform CDC?
    2. Is CCA (Control Ctr Agent) required if I am in all Oracle Environment (CT mappings)? (Source: Ora 10.1, Target 11.2)
    I would like to avoid using CCA if it's not needed.
    Thank you.

    Hi
    As well as the CDC template mapping, there are a bunch of options for CDC, ranging from Streams to replicated environments (OWB also supports trickle-feed maps for AQs in 11gR2), or you can use your own replication technology such as Oracle GoldenGate and simply work off of the replicated tables.
    If you are in an all-Oracle environment, code template mappings are not essential (proof being that all previous releases did not have them), but they do bring a lot of alternatives to the table which let you do more, and do it faster, than before.
    Cheers
    David

  • What about data warehousing?

    I have attended the 10g launch, perused the CD I got there and listened to a couple of online seminars. I can easily envision how 10g benefits OLTP applications, but data warehousing seems to be ignored in the presentations.
    Does 10g offer anything over 9i for data warehousing, which has quite different resource demands? Is there any published information I could read?

    DATA WAREHOUSING
    Oracle Database 10g has also enhanced its data warehouse and business intelligence capabilities, which results in a further reduction of the total cost of ownership while enabling customers to derive more value from their data and supporting real-time data feeds.

    Consolidation and integration of traditionally disparate business intelligence systems into a single integrated engine is further enhanced in Oracle Database 10g. Database size limits have been raised to millions of terabytes. Business intelligence applications can be consolidated alongside transactional applications, using Real Application Clusters automatic service provisioning to manage resource allocation. This consolidation means analysis can be performed directly against operational data, and resource utilization can be maximized by reallocating servers to workloads as business needs change. The value of data is increased with the ability to perform even more diverse analytic operations against core data, with enhanced OLAP analytics, a data mining GUI, and a new SQL MODEL feature. The SQL MODEL clause allows query results to be treated as sets of multidimensional arrays upon which sophisticated interdependent formulas are built. These formulas can be used in complex number-crunching applications such as budgets and forecasts without the need to extract the data to a spreadsheet or perform complex joins and unions.

    Real-time warehousing is enabled either by consolidating business intelligence with operational applications, or by new change data capture capabilities based on Oracle Streams which produce low- or zero-latency trickle feeds with integrated ETL processing.

    Joel Pérez

  • ODI DEF

    Hi ,
    Oracle Data Integrator is a comprehensive data integration platform that covers all data integration requirements: from high-volume, high-performance batch loads, to event-driven, trickle-feed integration processes, to SOA-enabled data services.
    What is batch loads ?
    What is event driven?
    What is trickle-feed integration?
    What is SOA-enabled data services?
    Please explain, as I am new to the ODI environment.
    Thanks in advance!

    What is batch loads? You have data coming in bulk from a system, maybe in the form of a nightly batch of files. These may be millions of records which are integrated into the system as a batch. ODI has support for batch loads of data.
    What is event driven? ODI wait components can wait for events to happen and then trigger an integration process. E.g., the OdiFileWait component can keep scanning a directory for a file and, when the file arrives, start the integration process. Similarly, OdiWaitForData can poll a table to see whether a number of records have been inserted into it and then initiate an integration process.
    What is trickle-feed integration? ODI has support for messaging queues like MQ. More theory can be found here: http://technology.amis.nl/blog/2409/the-trickle-feed-integration-pattern
    What is SOA-enabled data services? Web services can be used to perform data integration tasks.
    HTH

  • ODI vs Integration InterConnect

    For designing, implementing and managing interfaces between heterogeneous database systems, can someone describe the difference between these two technologies, i.e. the market scenarios each targets? They seem to overlap.

    Hello,
    I have a good knowledge of Oracle Data Integrator and the Fusion Middleware stack. Forgive me if I am not completely accurate in my analysis of Integration Interconnect.
    My understanding is:
    - Both products work to integrate heterogeneous data systems. I think this is the common point.
    - Integration Interconnect is part of OracleAS. It provides event-driven integration in a hub-and-spoke model, with simple transformations. It relies on OracleAS and uses XML as the key format.
    - Data Integrator is part of Fusion Middleware. It provides data-, event-, and service-oriented integration, with an ELT architecture (that is: code generation, using existing databases as the transformation engines).
    IMHO, Integration Interconnect is suitable for integrating small volumes of data (trickle-feed) when you have OracleAS, whereas Oracle Data Integrator is better for integrating large volumes of data (batch) when you need complex transformations.
    I think that Integration Interconnect is closer to Oracle ESB. ESB seems to be more complex and comprehensive than Integration Interconnect.
    Best,
    FX

  • Form feed, null and StringTokenizer

    Is a form feed recognized as null when using StringTokenizer?
    I currently have my StringTokenizer set with a blank space as the delimiter, and I am attempting to read until the value is null (EOF) and while it hasMoreTokens. The text file that I am reading spans several pages. My code falls out of the loop when it hits the last space on the first page.

    A form feed (\f) is not a null, but that shouldn't be stopping the StringTokenizer; I suspect that the manner in which you're reading in the data is the culprit. Try using the BufferedReader class to read your file.
    V.V.
