TT concept check ... requesting an example

I am still learning TimesTen (TT), specifically the In-Memory Database Cache (IMDB Cache), to improve the performance of our Oracle Database.
The three concepts that I have recently read about are:
*) DYNAMIC ASYNCHRONOUS WRITETHROUGH
*) Sliding Window Caching
*) Data-Aging Policy
I think that the above three concepts can be used to create a table in TT to support our need to cache the last 36 hours of transactions for our databases. Everything beyond the 36 hours can go to the Oracle database.
Does anyone have any code example to help me confirm or correct my understanding?
Also, how can we warm the cache or do the initial load of this type of sliding window?
I am downloading the Developer Day VM now and want to be able to practice the concepts.

Yes, caching just the last 36 hours data (or indeed the last 'n' anything) is usually quite straightforward. Sliding Window is a generic term that just means you keep the 'last n something' data in the cache and that this window of data is updated in (near) real time.
For READONLY cache groups, a sliding window is usually accomplished using a WHERE clause in the cache group definition that references a timestamp column holding the date/time each row was inserted. The WHERE clause specifies to cache only rows whose timestamp is within the required time interval before now (SYSDATE).
For AWT cache groups one uses the aging feature to manage the sliding window. You would use lifetime-based aging keyed to a timestamp column in the data. The application must ensure that this timestamp is set properly (SYSDATE) on every row inserted, and if rows are updated then the application needs to decide whether that should also update the timestamp (depending on what an update means to the application).
Note that the lifetime-based aging feature evaluates rows for aging out at the granularity of the defined lifetime unit. So if you define the lifetime as 36 hours, rows only become candidates for aging out once they are older than the lifetime, rounded up to whole hours, which means you may have data present that is up to 36 hours, 59 minutes and 59 seconds old. Also, note that aging is by design a low-priority background activity, intended not to impact the application workload, so under heavy load it may take aging a little while to remove 'expired' data. In short, you should expect the cache to typically contain a little more data than the 'window' defines.
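To make this concrete, here is a sketch of both approaches. The app.transactions table and its columns are invented for illustration, and the exact clauses should be checked against the TimesTen Cache documentation for your release:

```sql
-- Hypothetical schema: the application sets TXN_TIME to SYSDATE on every insert.
-- The two cache groups below are alternatives; a table can belong to only one.

-- READONLY sliding window: the WHERE clause restricts caching to the last
-- 36 hours (SYSDATE - 36/24 is 36 hours ago in Oracle date arithmetic).
CREATE READONLY CACHE GROUP ro_txns
AUTOREFRESH MODE INCREMENTAL INTERVAL 10 SECONDS
FROM app.transactions (
    txn_id   NUMBER    NOT NULL PRIMARY KEY,
    txn_time TIMESTAMP NOT NULL,
    amount   NUMBER
)
WHERE (app.transactions.txn_time > SYSDATE - 36/24);

-- AWT alternative: lifetime-based aging removes rows older than 36 hours
-- from the cache, while writes propagate asynchronously to Oracle.
CREATE ASYNCHRONOUS WRITETHROUGH CACHE GROUP awt_txns
FROM app.transactions (
    txn_id   NUMBER    NOT NULL PRIMARY KEY,
    txn_time TIMESTAMP NOT NULL,
    amount   NUMBER
)
AGING USE txn_time LIFETIME 36 HOURS CYCLE 5 MINUTES;

-- Warming / initial load: pull qualifying rows down from Oracle in bulk.
LOAD CACHE GROUP awt_txns COMMIT EVERY 1000 ROWS;
```

The final LOAD CACHE GROUP statement is also the usual answer to the warm-up question: run it once after creating the cache group to perform the initial bulk load from Oracle before the window starts sliding.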
Chris

Similar Messages

  • Create NSAPI SAF plug-in equivalent to check-request-limits

    Hi Masters,
    GoodDay. Okay, here's the problem:
    I'm a newbie at writing custom SAFs, and what I'd like to do is write a SAF close to this one:
    PathCheck fn="check-request-limits" monitor="$ip" max-rps="10" (specifically the check-request-limits fn).
    I believe source code of the server is not available so the question is:
    How can I get total request of a specific client in a given time (secs for example) [ip->TOTAL_REQUEST_IN_1_SEC]?
    char *ip = pblock_findval("ip", sn->client);
    //NEED HELP HERE
    if (ip->TOTAL_REQUEST_IN_1_SEC < 10) {
        //my logic goes here
    }
    Any help would do! Thanks in advance!

    Source code is out. Check this thread :
    Announcing Open Web Server: http://forums.sun.com/thread.jspa?forumID=759&threadID=5360537

  • Auto Spell Check Requested! Or just move the button, please.

    Auto Spell Check Requested! Or just move the button, please.

    Thanks for the suggestion.  We'll make sure to pass it along.
    If a forum member gives an answer you like, give them the Kudos they deserve. If a member gives you the answer to your question, mark the answer that solved your issue as the accepted solution.

  • In RSRT - Is it possible to check request wise data in RSRT only.

    Hi,
    In RSRT - Is it possible to check request wise data in RSRT only.
    Kindly advise me on the same.
    Thanks
    Bujji

    Saveen,
    Here is my problem.
    I have an InfoCube containing Material No and Base Unit.
    I have a DSO containing Material and the Alternate Unit, Numerator, and Denominator associated with that material.
    I need to match the material number between the InfoCube and the DSO and load the associated Alternate Unit, Numerator, and Denominator into the InfoCube.
    Since the InfoCube is non-cumulative, I am not able to build an InfoSet.
    So I added the InfoObjects (Alternate Unit, Numerator, and Denominator) to the cube.
    Now the cube has Material No and Base Unit, for both of which data is filled, plus the extra Alternate Unit, Numerator, and Denominator, for which the data is empty.
    I need to load the Alternate Unit, Numerator, and Denominator from the DSO where the material number matches the InfoCube.
    I am not very good at explaining; I hope you understand. Please bear with the long text.
    Please help me.
    Thanks.
    Guru

  • To check request scoped component in dyn/admin

    How do i check request scoped component in dyn/admin console?
    I have enabled loggingDebug for my formHandler but none of the debug messages appear in the console.
    I need to check the dyn/admin to see if loggingDebug=true.

    1004856 wrote:
    How do i check request scoped component in dyn/admin console?
    Request- and session-scoped components will not appear in the Component Browser. You can find them like this:
    [1]. In dyn/admin, click on Component Browser [http://localhost:8080/dyn/admin/nucleus]
    [2]. Suppose you want to open CartModifierFormHandler, which is request-scoped. Directly append the full component path to the URL, like below:
    http://localhost:8080/dyn/admin/nucleus/atg/commerce/order/purchase/CartModifierFormHandler/
    And Gurvinder has already mentioned enabling loggingDebug for such components.
    -RMishra

  • JMS Request/Response example

    Hi
    I am trying to implement a JMS request/response example on GlassFish, but I am not getting the correct behaviour.
    My code is below. I am sending a message to a queue and setting setJMSReplyTo to another queue. I call recv.receive(10000) and wait for the messages to be received. But this call blocks the current thread, and the MDB that I originally sent the message to only gets executed after recv.receive(10000) has timed out after 10 seconds.
    Can someone confirm that my code is correct, or am I doing something wrong?
    Connection connection = null;
    Session session = null;
    String text = "hello";
    try {
        System.out.println("Sending " + text);
        connection = searchDestFactory.createConnection();
        session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer messageProducer = session.createProducer(searchDest);
        TextMessage tm = session.createTextMessage();
        tm.setText(text);
        tm.setJMSReplyTo(destQueue);
        messageProducer.send(tm);
        System.out.println("Sent " + text);
        MessageConsumer recv = session.createConsumer(destQueue);
        connection.start();
        Message m = recv.receive(10000);
        tm = (TextMessage) m;
        if (tm != null) System.out.println(tm.getText());
        else System.out.println("No message replied");
    } catch (JMSException ex) {
        System.out.println(ex);
    }
    Thanks Glen

    Glen,
    I have never attempted to use the messaging service the way you have, namely a single instance as both sender and receiver, but I noticed that you do send the message before you register your Consumer. My first and easiest suggestion would be to simply move your consumer block (I would move both lines) above the producer block and try again.
    If that attempt fails, I would implement a MessageListener, once again before the producer block and allow it to handle received messages (no need for recv.receive(10000);)
    Example:
        public class QueueMessageListener implements MessageListener {
            public void onMessage(Message message) {
                try {
                    System.out.println(String.format("From Glassfish: %s received a %s of type %s.",
                            m_Queue.getQueueName(), message.getClass().getName(), message.getJMSType()));
                    System.out.println(printJMSMessage(message));
                } catch (JMSException ex) {
                    // handle exception here
                }
            }
        }
    and somewhere before the producer block:
        m_msgListener = new QueueMessageListener();
        m_msgConsumer = m_Session.createConsumer(m_Queue);
        m_msgConsumer.setMessageListener(m_msgListener);
        m_Connection.start();
    I feel like I've done my good deed for the day :)
    -Jerome BG

  • Finance for Wire , Check request ,Fixed Assets for the Capital Expenditure

    Hi,
    We are trying to integrate Finance for Wire, Check Request, and Fixed Assets for Capital Expenditure with Adobe Forms.
    Where would I find more information regarding this?
    What tables are used, and what related t-codes can give more information?
    Any documentation is appreciated.
    rgds
    vara


  • Process on Help request and Process on value request events examples

    HI All,
               Can anybody please give me some examples of the PROCESS ON HELP-REQUEST and PROCESS ON VALUE-REQUEST events?
    Thanks in advance

    HI,
    Check programs
    <b>demo_selection_screen_f1</b>.
    <b>demo_selection_screen_f4.</b>
    Regards,
    Sesh

  • Checking requesting cost center to a company code

    Hi All,
    How can I check that a requesting cost centre belongs to a company code?
    Thank you.
    Nies

    You need to use a validation along with a user exit.
    The user exit is where you write the logic that the company code (BUKRS) of the requesting cost center (AKSTL) should be the same as the company code on the WBS (PBUKR).
    Take the help of an ABAPer in writing the logic of the user exit.
    Regards
    Sreenivas
    Please close the post if satisfied.

  • Request for example KM content structure

    I am looking for an example of an existing portal structure, specifically the KM folder structure and KM content area. Any example is valuable and helps me think in the right direction as I develop our own structure for our brand-new global website.
    The next question, which relates to the above, is: what is the most critical thing to take into consideration when developing the KM structure?
    Any other tips and tricks on this subject are welcome.
    thanks
    Jaap

    Hello Ruturaj,
    Please check the mobile monitor MEREP_MON for the status of data coming from the backend to the MI server and then to the MI client on synchronisation, and vice versa.
    On inserting data via the client and synchronising, the corresponding BAPI wrapper is triggered on the backend and the data is updated in SAP.
    Regards,
    Himanshu Limaye

  • ASM - Concept - Clarification Request

    Hello All,
    I'm about to go ahead and install ASM for one of my clients. After going through the book ASM - Under the hood, I have a few clarifications, which I hope can be answered by the experts here.
    1- Since ASM uses its own algorithm for mirroring - can I have an odd number of disks in the +DATA diskgroup? Say 11 disks?
    2- In regards to failure groups, what is the concept? Say I have 1 diskgroup +DATA with 4 disks - does a failure group mean that if disk 1 goes, the primary extents move to another disk, say disk 3?
    - Can failure groups be in different diskgroups? For example, would the failure group for DATA disks be a disk in RECOVERY?
    - Or are failure groups additional disks which just sit there and are activated in case of a disk failure?
    3- On installation of ASM 10gR2, are there any things a first-timer should watch out for?
    4- Should I have a hot-spare disk on a 15-disk Dell MD1000 array - is this really necessary, and why? If one disk goes bad, then we can simply change it. Does this make sense if I have 4-hour gold support on site with a new disk?
    Thank in advance for any assistance.
    Jan

    1. Yes. ASM will determine the most suitable block mirroring strategy regardless of the number of disks in the diskgroup.
    2. Failure groups affect how ASM mirrors blocks across them. By default, each disk is in its own failure group: it is assumed that each disk can fail independently of the others. If you assign two different disks to the same failure group, you indicate that they are likely to fail together (for example, if they share an access path and the controller for that access path fails), so ASM will only create a single mirror on them and will try to create another mirror in another failure group. For example, if you assign disk1 and disk2 to the same failure group, ASM will never create a mirror of a block from disk1 on disk2; it will only mirror to a different failure group. Note that if your storage is already RAIDed, EXTERNAL redundancy diskgroups are pretty safe: hardware RAIDs are usually more efficient than NORMAL redundancy ASM groups while maintaining the same level of protection, thanks to hardware acceleration and the large caches they sport these days.
    3. Not really, as long as you follow the documented procedures and have Oracle patched to the current patchset level. However, if you employ ASMLIB, there might be issues that differ by the storage vendor.
    4. If you are sure that no other disk will fail within those 4 hours, a hot spare is probably not necessary. If availability is a concern, though, always plan for the worst case: having a hot spare will protect you from a second failure while the replacement is en route.
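    To illustrate point 2, here is a sketch of a diskgroup with explicit failure groups. The disk paths are invented; adapt them to your storage:

    ```sql
    -- Disks that share a controller go in the same failure group, so ASM
    -- never places a mirror copy on a disk that could fail with its primary.
    CREATE DISKGROUP data NORMAL REDUNDANCY
      FAILGROUP ctrl1 DISK '/dev/rdsk/c1t1d0s4', '/dev/rdsk/c1t2d0s4'
      FAILGROUP ctrl2 DISK '/dev/rdsk/c2t1d0s4', '/dev/rdsk/c2t2d0s4';
    ```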
    Regards,
    Vladimir M. Zakharychev

  • When is the average rps computed? (check-request-limits throttling)

    Hi all,
    The documentation indicates that the default interval for computing the average requests per second is 30 seconds. My question is: when does this 30-second window start and end? Is it based on the server clock - that is, does it compute the average rps at 01:01:00, then at 01:01:30, then at 01:02:00, etc.? Or is it based on the server's up-time? Or perhaps on another algorithm altogether?
    I simply need to know the frame of reference for computing the average rps. The documentation and the blog posts by Jyri Virkki are not clear about this.
    Thank you in advance,

    IIRC, the interval is computed on demand when needed for a given rule. The interval is a minimum; if no matching requests arrive, the rps won't be computed again until needed.
    FYI Sun open-sourced most of the Web Server code, so you can check the implementation directly if you wish.
    Looks like the relevant part is starting on line 308 of http://heliod.svn.sourceforge.net/viewvc/heliod/trunk/src/server/safs/reqlimit.cpp?revision=2&view=markup
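    For reference, the averaging interval is a parameter of the SAF itself, so a rule can be given its own window. A sketch of an obj.conf entry (parameter names as documented for Web Server 7.0 - verify against your version):

    ```
    <Object name="default">
    # Throttle each client IP to 10 requests/second, averaged over 30 seconds.
    PathCheck fn="check-request-limits" monitor="$ip" max-rps="10" interval="30"
    </Object>
    ```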

  • Concept explanation with some examples

    hi,
    what is a debit and credit note with some examples
    regards
    sudharshan

    Hello.
    In AP:
    Debit Memo - a record you enter to debit your supplier. It decreases your debt to the supplier. It can be issued for reasons like a disagreement about prices. When you pay an invoice to your supplier, you can include the debit memo so the payment will be for a lower amount.
    Credit Memo - works the same way, but normally it is issued by your supplier.
    In AR:
    Debit Memo - works like a normal invoice, but it can be issued outside the normal invoicing system, for example to correct an invoice. It increases the debt of your customer.
    Credit Memo - works the opposite of the debit memo.
    Octavio

  • Configuration check request

    We need to test 10 thermocouples, 3 LVDT signals, and 3 strain gauges at one time. We want the signal conditioning and DAQ configuration to both meet our requirements and be low cost, so we selected the following configuration:
    SCXI-1000
    SCXI-1520
    SCXI-1330(instead of SCXI-1314)
    SCXI-1540
    SCXI-1330(instead of SCXI-1315)
    SCXI-1102
    SCXI-1330(instead of SCXI-1300)
    PCI-6023E(12 bit is enough)
    SSH6868
    SCXI-1360,SCXI-1361
    We hope these selections make a complete DAQ system. Could you check the configuration and tell us if there are any mistakes? A better configuration is also welcome. Thanks.

    Instead of the SSH6868, you will need a shielded cable assembly to connect your 60xxE (E-series card) to the SCXI chassis. You will require the SCXI-1349 cable assembly. It has two parts 1)SH6868 cable and 2) an adapter to connect to the SCXI backplane.
    As for the choice of the DAQ card, you may find the low-cost approach severely limits the system bandwidth and overall accuracy. The low-cost E-series boards only have 512 samples of onboard memory, as opposed to 8192 for the regular E-series. This makes controlling buffered analog input more difficult. In addition, you need to understand that the rated sampling rate for the board is based upon a single data channel. As you have 16 channels, your "best" sampling rate would be approximately 12.5 kHz per channel. The actual rate will be less. You have made no mention of the type of data you want to acquire (steady-state, dynamic, time domain, frequency domain), so I cannot give any better insight into the "best" sampling rate you could achieve.
    Now, considering that you are mixing thermocouple (TC) readings with other inputs in the same DAQ call, you may find the 12-bit selection degrades your data quality. When you acquire data using a multiplexer, the quoted accuracy for the device assumes that the voltage levels on the various channels are all very near the same value. This allows a relatively uniform settling time for the input signal. It is quite possible that the TC inputs will be on the order of 2-10 mV while the other signals could range as high as 0.5 volts. Such a difference in input levels can lead to settling errors that are not obvious to the user but can cause gross errors in the measured data levels. If you set the input range (only 4 ranges for the low-cost E-series) to +0.5/-0.5 volts, your resolution for the TCs will be about +/- 0.24 mV, or about 6-10 degrees of error (ouch!). So you will be required to set different input ranges for the different channels (+0.05/-0.05 V for the TCs and +5/-5 or +0.5/-0.5 V for the remaining channels). These multiple-range calls can further limit your bandwidth by slowing down the "round-robin" interval between the range changes. I know that your goal is low cost; however, you may find the data quality unacceptable. You will want to consider, at the very least, upgrading to the 6034E board. It is a 16-bit device (still only four input ranges); however, if you configure the board for only 1 input range, +0.5/-0.5 volts, your bandwidth will improve and the TC resolution will improve by a factor of 16, to approximately 0.015 mV (under 1 degree).

  • Plug-ins w/ Flash Media Playback -- Request for Examples

    The FMP FAQ states:
    Flash Media Playback ... does support dynamically loaded plug-ins from third-party service providers.
    Unfortunately, the FMP Setup page does not include any fields for setting plug-in related values.
    Can anyone provide links to any examples of FMP loading plug-ins?
    For instance the SMILPlugin or the YouTubePlugin?
    If the YouTubePlugin can be used with FMP, a link to an example may be helpful in responding to this forum post:
    http://forums.adobe.com/message/3815212
    Thanks,
    g

    Hi Greg,
    FMP supports plugins, but for security reasons, they have to be loaded from whitelisted domains. I could add some examples for FMP, but the users will be pretty much restricted to those plugins - since they will not be allowed to use their plugins hosted on untrusted domains. In this way, cross site scripting and tracking risks are much reduced.
    If a video site owner wants to benefit from this advantage without hosting his own SMP, please let him contact us.
    You can find a short description of the feature here:
    http://help.adobe.com/en_US/FMPSMP/Dev/WSc6f922f643dd2e6d-10c1507912c0d5d107e-7fff.html
