Read-Through Caching with expiry-delay and near cache (front scheme)

We are experiencing a problem with our custom CacheLoader and a near cache combined with an expiry-delay on the backing-map scheme.
I was under the assumption that with an expiry-delay configured on the back-scheme, the corresponding near cache (front-scheme) entry would be evicted when the backing entry was evicted. But according to our tests we have to put an expiry-delay on the front-scheme too.
Is it correct that there is no automatic eviction on the near cache (front-scheme) in this case?
With this config, near cache is never cleared:
             <near-scheme>
                  <scheme-name>config-part</scheme-name>
                  <front-scheme>
                        <local-scheme />
                  </front-scheme>
                  <back-scheme>
                        <distributed-scheme>
                              <scheme-ref>config-part-distributed</scheme-ref>
                        </distributed-scheme>
                  </back-scheme>
            <autostart>true</autostart>
            </near-scheme>
            <distributed-scheme>
                  <scheme-name>config-part-distributed</scheme-name>
                  <service-name>partDistributedCacheService</service-name>
                  <thread-count>10</thread-count>
                  <backing-map-scheme>
                        <read-write-backing-map-scheme>
                              <read-only>true</read-only>
                              <scheme-name>partStatusScheme</scheme-name>
                              <internal-cache-scheme>
                                    <local-scheme>
                                          <scheme-name>part-eviction</scheme-name>
                                          <expiry-delay>30s</expiry-delay>
                                    </local-scheme>
                              </internal-cache-scheme>
                              <cachestore-scheme>
                                    <class-scheme>
                                          <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                    </class-scheme>
                              </cachestore-scheme>
                              <refresh-ahead-factor>0.5</refresh-ahead-factor>
                        </read-write-backing-map-scheme>
                  </backing-map-scheme>
                  <autostart>true</autostart>
                  <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
            </distributed-scheme>

With this config (added expiry-delay on the front-scheme), the near cache gets cleared:
        <near-scheme>
                  <scheme-name>config-part</scheme-name>
                  <front-scheme>
                        <local-scheme>
                             <expiry-delay>15s</expiry-delay>
                        </local-scheme>
                  </front-scheme>
                  <back-scheme>
                        <distributed-scheme>
                              <scheme-ref>config-part-distributed</scheme-ref>
                        </distributed-scheme>
                  </back-scheme>
            <autostart>true</autostart>
            </near-scheme>
            <distributed-scheme>
                  <scheme-name>config-part-distributed</scheme-name>
                  <service-name>partDistributedCacheService</service-name>
                  <thread-count>10</thread-count>
                  <backing-map-scheme>
                        <read-write-backing-map-scheme>
                              <read-only>true</read-only>
                              <scheme-name>partStatusScheme</scheme-name>
                              <internal-cache-scheme>
                                    <local-scheme>
                                          <scheme-name>part-eviction</scheme-name>
                                          <expiry-delay>30s</expiry-delay>
                                    </local-scheme>
                              </internal-cache-scheme>
                              <cachestore-scheme>
                                    <class-scheme>
                                          <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                    </class-scheme>
                              </cachestore-scheme>
                              <refresh-ahead-factor>0.5</refresh-ahead-factor>
                        </read-write-backing-map-scheme>
                  </backing-map-scheme>
                  <autostart>true</autostart>
                  <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
            </distributed-scheme>

Hi Jakkke,
The near-cache scheme allows configurable levels of cache coherency, from a basic expiry-based cache to an invalidation-based cache to a data-versioning cache, depending on the coherency requirements. A near cache is commonly used to get read performance close to that of a replicated cache without losing the scalability of a partitioned (distributed) cache. This is achieved by keeping a subset of the data (based on MRU or MFU) in the <front-scheme> of the near cache and the complete data set in the <back-scheme>. Updates to the <back-scheme> can automatically trigger events that invalidate the corresponding entries in the <front-scheme>, based on the invalidation strategy (present, all, none, auto) configured for the near cache.
If you want the entries in both the <front-scheme> and the <back-scheme> to expire, you need to specify an expiry-delay on both schemes, as in your second example. However, if you are expiring items in the <back-scheme> only so that they get reloaded from the cache store, and the <front-scheme> keys stay the same (only the values should be refreshed from the cache store), then you do not need an expiry-delay on the <front-scheme>; instead, set the invalidation-strategy to present. But if you want a different set of entries in the <front-scheme> after a specified expiry delay, then you do need to configure that expiry in the <front-scheme>.
The near cache can keep the front-scheme and back-scheme data in sync, but the expiry of entries is not synced; the front-scheme is always a subset of the back-scheme.
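For illustration, your first configuration with an explicit invalidation strategy would look roughly like this (a sketch only, reusing your scheme names and not tested against your cache store; the <invalidation-strategy> element is a direct child of <near-scheme>):

             <near-scheme>
                  <scheme-name>config-part</scheme-name>
                  <front-scheme>
                        <local-scheme />
                  </front-scheme>
                  <back-scheme>
                        <distributed-scheme>
                              <scheme-ref>config-part-distributed</scheme-ref>
                        </distributed-scheme>
                  </back-scheme>
                  <invalidation-strategy>present</invalidation-strategy>
                  <autostart>true</autostart>
             </near-scheme>

With present, the near cache listens for back-tier events only for the keys currently held in the front tier, so front entries are invalidated when the corresponding back-tier entries change, without needing a separate front-scheme expiry.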
Hope this helps!
Cheers,
NJ

Similar Messages

  • Problem with Expiry Period for Multiple Caches in One Configuration File

    I need a cache system with multiple expiry periods, i.e. some records should exist for, let's say, 1 hour, some for 3 hours and others for 6 hours. To achieve this, I am trying to define multiple caches in the config file and, based on the data, I choose the cache with the appropriate expiry period. That's where I am facing a problem. I am able to create the caches in the config file, and they have different eviction policies, i.e. for Cache1 it is 1 hour and for Cache2 it is 3 hours. However, the data stored in Cache1 does not expire after 1 hour; it expires after the expiry period of the other cache, i.e. Cache2.
    Please correct me if I am not going about this the right way. I am attaching the config file here. Attachment: near-cache-config1.xml (to use this attachment you will need to rename 142.bin to near-cache-config1.xml after the download is complete).

    Hi Rajneesh,
    In your cache mapping section, you have two wildcard mappings ("*"). These provide an ambiguous mapping for all cache names.
    Rather than doing this, you should have a cache mapping for each cache scheme that you are using -- in your case the 1-hour and 3-hour schemes.
    I would suggest removing one (or both) of the "*" mappings and adding entries along the lines of:
    <cache-mapping>
          <cache-name>near-1hr-*</cache-name>
          <scheme-name>default-near</scheme-name>
    </cache-mapping>
    <cache-mapping>
          <cache-name>near-3hr-*</cache-name>
          <scheme-name>default-away</scheme-name>
    </cache-mapping>
    With this scheme, any cache that starts with "near-1hr-" (e.g. "near-1hr-Cache1") will have 1-hour expiry. And any cache that starts with "near-3hr-" will have 3-hour expiry. Or, to map your cache schemes on a per-cache basis, in your case you may replace "near-1hr-*" and "near-3hr-*" with Cache1 and Cache2 (respectively).
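    As a rough sketch (the scheme types and names below are assumptions, not taken from your attached file; adapt them to whatever schemes near-cache-config1.xml actually defines), the two referenced schemes would then each carry their own expiry:

    <local-scheme>
          <scheme-name>default-near</scheme-name>
          <expiry-delay>1h</expiry-delay>
    </local-scheme>

    <local-scheme>
          <scheme-name>default-away</scheme-name>
          <expiry-delay>3h</expiry-delay>
    </local-scheme>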
    Jon Purdy
    Tangosol, Inc.

  • Storage disabled nodes and near-cache scheme

    This is probably a newbie question. I have a named cache with a near-cache scheme, with a local-scheme as the front tier. I can see how this will work on a cache-server node, but I have an application node that pushes a lot of data into the same named cache and is set to be storage-disabled.
    My understanding of a local cache scheme is that data is cached locally on the heap for faster access and writes are delegated to the service for writing to the backing map. If my application node is storage-disabled, is the local cache still used, or is all data obtained from the cache servers?

    Hello,
    Your understanding is correct. To answer your question, writes always go through the cache servers. A put will also always go through the cache servers, but the near cache may or may not be populated at that point.
    hth,
    -Dave

  • Help to read a table with data source and convert time stamp

    Hi Gurus,
    I have a requirement to write an ABAP program. When I execute the program, it should ask me to enter a data source name; the program should then read a table with this data source as the key, take the time stamp from that table, and display the data source and time stamp as output.
    As follows:
    Enter Data Source Name: 
    Then the user enters: 2lis_11_vahdr
    Then the output should be "Data source: 10-15-2008".
    The time stamp format in the table is 20050126031520 (YYYYMMDDhhmmss). I have to display it as 05-26-2005. Any help would be appreciated.
    Thanks,
    Ram

    Hi Jayanthi Babu Peruri,
    I tried extracting YEAR, MONTH and DAY separately and writing them using an EDIT MASK.
    There is surely some standard conversion routine for this, but I have no idea which one.
    DATA : V_TS      TYPE TIMESTAMP,
           V_TS_T    TYPE CHAR16,
           V_YYYY    TYPE CHAR04,
           V_MM      TYPE CHAR02,
           V_DD      TYPE CHAR02.
    START-OF-SELECTION.
      " Get the current time stamp (YYYYMMDDhhmmss) and copy it to a character field
      GET TIME STAMP FIELD V_TS.
      V_TS_T = V_TS.
      CONDENSE V_TS_T.
      " Split out year, month and day (the CHAR04 target truncates to the first 4 characters)
      V_YYYY = V_TS_T.
      V_MM   = V_TS_T+4(2).
      V_DD   = V_TS_T+6(2).
      " Rearrange to MMDDYYYYhhmmss so the edit mask below prints MM-DD-YYYY
      V_TS_T(2) = V_MM.
      V_TS_T+2(2) = V_DD.
      V_TS_T+4(4) = V_YYYY.
      SKIP 10.
      WRITE : /10 V_TS," USING EDIT MASK '____-__-________'.
              /10 V_YYYY,
              /10 V_MM,
              /10 V_DD,
              /10 V_TS_T USING EDIT MASK '__-__-__________'.
    If you want DATE alone, just declare the length of V_TS_T as 10.
    Regards,
    R.Nagarajan.

  • Reader X integration with Windows Explorer and protected mode

    Hello all.
    I'm having some problems with Adobe Reader X under Windows 7, and it seems that I need to disable Protected Mode. I disabled it by unchecking the related option in Edit > Preferences, and I also created the registry key for the local user (bProtectedMode = 0).
    After doing that, Protected Mode seems to be disabled when I open PDF documents with Reader X, but it is still enabled when previewing document thumbnails in Windows Explorer. The issue is that when I open a folder containing PDF files and left-click or right-click them, two AcroRd32.exe processes spawn, and it seems to me that this is Protected Mode behaviour (a broker process plus a sandboxed renderer process).
    I'd like to know if it's possible to fully disable Protected Mode so that it remains disabled even when I'm navigating directories with Windows Explorer and documents are previewed. My current Adobe Reader version is 10.1.3.
    Thank you very much.

    Hi Fernando,
    Here's a statement that hopefully helps or at least solves your mystery:
    While Protected Mode can be disabled for PDFs viewed with the product, Adobe continues to protect you when 3rd party software invokes a Reader process; that is, Protected Mode sandboxing cannot be disabled for shell extensions. For example, when you use Windows Explorer to preview a PDF in the Preview Pane, it starts a Reader process to display the preview. In such cases, Task Manager shows that two AcroRd32.exe processes spawn and that the operation is occurring with Protected Mode enabled.
    Ben

  • BC.Next Caching With Content Holders and Includes

    I noticed that the BC.Next caching on scripts, css and images doesn't appear to work with content holders and includes. I assume that is just something that still needs to be worked on?
    Thoughts?
    Thank You!

    Hi Liam,
    Static content covers images, styles and scripts from pages and templates. Blogs are not supported at this point. We'll follow up on that, as we will be adding a way to cache some content from modules as well. But for now, pages and templates represent a huge chunk of the BC content and we wanted to start with that first.
    Cristinel

  • I can't clear or move my cache [with FF 15 and FF16)

    I can't move or clear my cache! When I go to Options > Advanced > Network, there is just blank space under "Cached Web Content". But when I go to about:cache I see:
    Memory cache device
    Number of entries: 731
    Maximum storage size: 32768 KiB
    Storage in use: 15459 KiB
    Inactive storage: 15558 KiB
    List Cache Entries
    And I note that it doesn't list a location, whereas for the offline cache it does:
    Number of entries: 0
    Maximum storage size: 512000 KiB
    Storage in use: 0 KiB
    Cache Directory: C:\Users\Bernie\AppData\Local\Mozilla\Firefox\Profiles\1rp651v2.default\OfflineCache
    I've tried to move the cache directory: I set browser.cache.disk.parent_directory in about:config, but nothing has ever been written to that directory (it already exists). And in fact, I can't find my cache at all: it says there are 15 MB somewhere, but I can't find where. I've poked around in the profiles\...default hierarchy and there's basically zero space used, except for 66 MB in "urlclassifier3.sqlite".
    I noticed this with FF15 and I'd hoped that it would be fixed when I upgraded to FF16, but it is all still the same.

    I have the same problem; I updated to FF 17 today and the problem persists.
    I don't have any settings that modify the cache's behaviour; everything is as it should be. But after I installed an extension a week ago FF wouldn't start at all, so I reset FF. The extension was gone, I put back my old profile (since the reset deleted all my settings), and everything was back EXCEPT this cache bug. Because of it, my New Tab page (top pages you visited) is also broken, as I can't remove any site that I don't want in my top 5.

  • Problems reading PDF files with iPad and Mac with 10.8.3 Mountain Lion OS

    I assume InDesign uses the Acrobat XI resources to generate the exported PDFs.
    I have many 1-7 page articles from which I created PDFs via the export function in ID. They all read OK on PC/Apple/Linux based computers.
    I assembled all of the articles into one 50 page document in ID and again exported the large file as a PDF.
    Several of the articles inside the large document now exhibit problems when viewed on an iPad or a Mac running OS 10.8.3 (Mountain Lion). PC and Linux do not have this problem.
    The problems are, to quote my subscriber: "each paragraph starts with a 'dollar' sign instead of 'Th'. Later the 'dollar' symbol changes to ' " '. The same thing happens in the Piston Ring article. This doesn't happen on the original PDF files."
    Opening the files in Acrobat Pro XI to look for differences does not discover anything although I assume the problem is somewhere in the font files.
    Any suggestions?

    I've asked the question about the versions and producers of the PDF reader software he is using. I'll be back when that answer comes in.
    I was afraid that Adobe might not use code identical to Acrobat Pro XI to generate PDFs in InDesign. I may have to 'print' to Acrobat and see if that generates a trouble-free PDF.

  • Macbook sleeps with long delay and tons of system messages

    It takes my MacBook Pro a long time to go to sleep (4 minutes or more). During this time thousands of messages are produced in the System log.
    There are so many of them that some messages are discarded. Some of these messages:
    05/10/2014 16:31:08.000 kernel[0]: [0x445334a000, 0x2000]
    05/10/2014 16:31:08.000 kernel[0]: [0x445335c000, 0x2000]
    05/10/2014 16:31:08.000 kernel[0]: [0x4453372000, 0x2000]
    05/10/2014 16:31:08.000 kernel[0]: [0x4453384000, 0x2000]
    05/10/2014 16:31:08.000 kernel[0]: *** kernel exceeded 500 log message per second limit  -  remaining messages this second discarded ***
    05/10/2014 16:31:09.000 kernel[0]: [0x2cb737000, 0x3000]
    05/10/2014 16:31:09.000 kernel[0]: [0x2cb84c000, 0x8000]
    05/10/2014 16:31:09.000 kernel[0]: [0x2cb855000, 0xb000]
    05/10/2014 16:31:09.000 kernel[0]: [0x2cb862000, 0x3000]
    What do these messages mean?
    I have tried, without success, the following things:
    - reinstall the system
    - perform a disk verification
    - repair file permissions
    - boot in safe mode
    What I have not tried is a hardware check, because I couldn't. (Starting up with D pressed has no effect and starting up with option-D pressed initiates the check, but this is aborted because my machine doesn't support it.)
    The behavior takes place with hibernation, not with "unsafe sleep".
    Machine: MacBook Pro 15" mid-2014, 2.4 GHz Core i5, 8GB memory, with Mac OS 10.9.5.
    System messages at the start of the sleep:
    05/10/2014 16:30:58.114 loginwindow[39]: CGSSetWindowTags: Invalid window 0xffffffff
    05/10/2014 16:30:59.404 WindowServer[100]: _CGXHWCaptureWindowList: No capable active display found.
    05/10/2014 16:31:05.000 kernel[0]: AirPort_Brcm43xx::powerChange: System Sleep
    05/10/2014 16:31:07.000 kernel[0]: hibernate image path: /var/vm/sleepimage
    05/10/2014 16:31:07.000 kernel[0]: efi pagecount 125
    05/10/2014 16:31:07.000 kernel[0]: hibernate_page_list_setall(preflight 1) start 0xffffff80e6133000, 0xffffff80e6173000
    05/10/2014 16:31:07.000 kernel[0]: hibernate_page_list_setall time: 258 ms
    05/10/2014 16:31:07.000 kernel[0]: pages 2044563, wire 264046, act 663382, inact 612315, cleaned 46 spec 1409, zf 27302, throt 0, compr 476063, xpmapped 40000
    05/10/2014 16:31:07.000 kernel[0]: could discard act 0 inact 0 purgeable 0 spec 0 cleaned 0
    05/10/2014 16:31:07.000 kernel[0]: WARNING: hibernate_page_list_setall skipped 87800 xpmapped pages
    05/10/2014 16:31:07.000 kernel[0]: hibernate_page_list_setall preflight pageCount 2044563 est comp 43 setfile 3898605568 min 4294967296
    05/10/2014 16:31:07.000 kernel[0]: [0x13b4cc3000, 0xbc9000]
    05/10/2014 16:31:07.000 kernel[0]: [0x13d0f85000, 0xbbc000]
    In between: thousands of these kernel messages
    System messages at the end of the sleep:
    05/10/2014 16:31:11.000 kernel[0]: [0x4624b20000, 0x1000]
    05/10/2014 16:31:11.000 kernel[0]: [0x4624b2c000, 0xe2000]
    05/10/2014 16:31:11.000 kernel[0]: *** kernel exceeded 500 log message per second limit  -  remaining messages this second discarded ***
    05/10/2014 16:34:25.000 kernel[0]: hibernate_page_list_setall(preflight 0) start 0xffffff80e6133000, 0xffffff80e6173000
    05/10/2014 16:34:25.000 kernel[0]: hibernate_page_list_setall time: 296 ms
    05/10/2014 16:34:25.000 kernel[0]: pages 2048039, wire 266098, act 663683, inact 612389, cleaned 40 spec 1946, zf 27756, throt 0, compr 476127, xpmapped 40000
    05/10/2014 16:34:25.000 kernel[0]: could discard act 0 inact 0 purgeable 0 spec 0 cleaned 0
    05/10/2014 16:34:25.000 kernel[0]: WARNING: hibernate_page_list_setall skipped 87800 xpmapped pages
    05/10/2014 16:34:25.000 kernel[0]: hibernate_page_list_setall found pageCount 2048039
    05/10/2014 16:34:25.000 kernel[0]: IOHibernatePollerOpen, ml_get_interrupts_enabled 0
    05/10/2014 16:34:25.000 kernel[0]: IOHibernatePollerOpen(0)
    05/10/2014 16:34:25.000 kernel[0]: encryptStart 9c190
    05/10/2014 16:34:25.000 kernel[0]: bitmap_size 0x3f784, previewSize 0x24f278, writing 2047182 pages @ 0x32ab8c
    05/10/2014 16:34:25.000 kernel[0]: encryptEnd 95f4600
    05/10/2014 16:34:25.000 kernel[0]: image1Size 0x1430d000, encryptStart1 0x9c190, End1 0x95f4600
    05/10/2014 16:34:25.000 kernel[0]: encryptStart 1430d000
    05/10/2014 16:34:25.000 kernel[0]: PMStats: Hibernate write took 192562 ms
    05/10/2014 16:34:25.000 kernel[0]: all time: 192562 ms, comp bytes: 6044094464 time: 5593 ms 1030 Mb/s, crypt bytes: 4112823408 time: 6386 ms 614 Mb/s,
    05/10/2014 16:34:25.000 kernel[0]: image 0 (0%), uncompressed 6044090368 (427032), compressed 4284058624 (70%), sum1 9a59058d, sum2 8143d3e0
    05/10/2014 16:34:25.000 kernel[0]: zeroPageCount 97178, wiredPagesEncrypted 155568, wiredPagesClear 109748, dirtyPagesEncrypted 1210326
    05/10/2014 16:34:25.000 kernel[0]: hibernate_write_image done(e00002e8)
    05/10/2014 16:34:25.000 kernel[0]: sleep
    05/10/2014 16:34:25.000 kernel[0]: Wake reason: EC LID0
    05/10/2014 16:34:25.000 kernel[0]: SMC::smcHandleInterruptEvent WARNING status=0x0 (0x40 not set) notif=0x0
    05/10/2014 16:34:38.000 kernel[0]: full wake (reason 1) 23 ms
    05/10/2014 16:34:38.000 kernel[0]: AirPort_Brcm43xx::powerChange: System Wake - Full Wake/ Dark Wake / Maintenance wake
    05/10/2014 16:34:38.000 kernel[0]: Previous Sleep Cause: 5
    05/10/2014 16:34:38.000 kernel[0]: wlEvent: en1 en1 Link DOWN virtIf = 0
    05/10/2014 16:34:38.000 kernel[0]: AirPort: Link Down on en1. Reason 8 (Disassociated because station leaving).
    05/10/2014 16:34:38.000 kernel[0]: **** [IOBluetoothHostControllerUSBTransport][SuspendDevice] -- Resume -- suspendDeviceCallResult = 0x0000 (kIOReturnSuccess) -- 0x1400 ****
    05/10/2014 16:34:38.616 WindowServer[100]: CGXDisplayDidWakeNotification [42269999693223]: posting kCGSDisplayDidWake
    05/10/2014 16:34:38.617 WindowServer[100]: handle_will_sleep_auth_and_shield_windows: Ordering out authw 0x7ff86b72bc70(2000), shield 0x7ff868f6d380(2001) (lock state: 2)
    05/10/2014 16:34:38.618 WindowServer[100]: handle_will_sleep_auth_and_shield_windows: errs 0x0, 0x0, 0x0
    05/10/2014 16:34:40.949 loginwindow[39]: ERROR | -[LWBuiltInScreenLockAuthLion closeAuthAndReset:] | Attempted to remove an observer when not observing
    05/10/2014 16:34:40.000 kernel[0]: ASP_TCP Disconnect: triggering reconnect by bumping reconnTrigger from curr value 5 on so 0xffffff8027164178
    05/10/2014 16:34:41.000 kernel[0]: AFP_VFS afpfs_DoReconnect:  doing reconnect on /Volumes/.TMBACKUP-1

    Thanks for the tip. I checked out the link. The issue is not that my log files are large; it's that log messages (the same message over and over again) are constantly being written to the Console application - I guess by syslogd - at a crazy rate per second, which causes the CPU to spike and my free RAM to go from, say, 3 GB to a few MB within a matter of seconds. I actually can't find a log file on my system (via Spotlight search or Terminal "grep" searching) that contains the hundreds and thousands of messages showing up in the Console app. syslogd just goes "crazy" and pretty much takes over the CPU and depletes my computer's free RAM when handling these:
    0x0-0x18018.com.apple.logic.pro358 ARGH: preNdx < 0 (-1073741821)
    messages.

  • How to read Images associate with Purchase Requisitions and its Line Items

    Hello,
    My scenario: I need to read the images associated with a PR and its line items and pass them on via an IDoc FM.
    Any idea where these images are stored - in an SAP table, or are they only available at runtime?
    Regards,
    Abhishek

    You can have a look at the link here [GOS Contents|http://help-abap.zevolving.com/2009/02/generic-object-services-gos-toolbar-part-5-get-note-attachment-contents/]
    Please note that in your case, if it is attachments only, then pass la_relat-low = 'ATTA*'. If there is no filtering, then pass 'NOTE', 'URL' and 'ATTA' as required.
    For the business object of the PR, pass BUS2105, and for the instance ID, pass the PR number.
    The tables related to this are srrelroles, srgbinrel, sood, etc.

  • File Adapter : read XML file with data validation and file rejection ?

    Hello,
    In order to read an XML file with the file adapter, I have defined an XSD that I have imported into my project.
    Now the File Adapter reads the file correctly but it does not give an error when:
    - the data types are not valid. Ex: dateTime is expected in a node and a string is provided
    - the XML file has invalid attributes.
    How can I manage error handling for XML files?
    Should I write my own Java XPath function to validate the file after it is processed? (Here is an example of doing this: http://www.experts-exchange.com/Web/Web_Languages/XML/Q_21058568.html)
    Thanks.

    One option is to specify validateXML on the partner link (the one that describes the file adapter endpoint), as shown here:
    <partnerLinkBinding name="StarLoanService">
    <property name="wsdlLocation"> http://<hostname>:9700/orabpel/default/StarLoan/StarLoan?wsdl</property>
    <property name="validateXML">true</property>
    </partnerLinkBinding>
    hth clemens

  • Adobe Reader 8 compatible with Acrobat Standard and Pro 7?

    Our organization has Adobe Reader 7 deployed to all users. Some users also have Acrobat Standard 7 or Acrobat Pro 7. We want to upgrade all users to Adobe Reader 8 (or maybe 9) but do not want to upgrade Standard and Pro users yet.
    I have not been able to find any information about compatibility issues with Reader 8 and older versions of Acrobat, but last time we did an upgrade in similar fashion we experienced issues. Any insight?

    Thanks for the reply. I stumbled across something today after posting. I had already known that if the ARX user also had one of the other products (AAXS or AAXP), then they would get the dialog box. Here is what I just figured out: if the user uninstalled ARX and re-installed it (still having both ARX and AAXS or AAXP in the end), then they no longer get the dialog box. It seems like perhaps it's an issue where AAXP and AAXS are overwriting something in the registry.

  • Querying Oracle Text using phrase with equivalence operator and NEAR

    Hello,
    I have two queries I'm running that are returning puzzling results. One query is a subset of the other. The queries use a NEAR operator and an equivalence operator.
    Query 1:
    NEAR((sister,father,mother=yo mama=mi madre),20) This is returning 3 results
    I believe Query 1 should return all records containing the words sister AND father AND (mother OR yo mama OR mi madre) that are within 20 words of each other.
    Query 2 (a subset of Query 1):
    NEAR((sister,father,mother=yo mama),20) This is returning 5 results
    I believe Query 2 should return all records containing the words sister AND father AND (mother OR yo mama) that are within 20 words of each other.
    Why would Query 1 be returning fewer results than Query 2, when Query 2 is a subset of Query 1? Shouldn't Query 1 return at least the same amount or more results than Query 2?
    ~Mimi

    For future questions about Oracle Text, you can try the Oracle Text forum at: Text
    There you have a better chance of receiving an answer.

  • Read/Write speeds with Airport Extreme and USB Hard Drives

    Anybody know how fast the read/write speeds are with a USB hard drive and the Airport Extreme Base Station?
    My Setup
    I have a 17 MBP (2010) and a 13 MBP (2011) - no SSD's
    MBP generally connect via 802.11n 5Ghz with a very strong signal
    Gigabit connection on a Windows Desktop
    I have a variety of external drives (7200, 5400, Drobo) at varying capacity (320GB, 2TB, 5.6TB)
    Goals
    Backup: Time machine
    Backup: File copies of pictures and home videos (greater than 1GB files)
    Backup: Crashplan
    File sharing: Aperture libraries--not sure if that is possible or practical. That is something that I need to research further...but on the off chance people have experience with it.
    Thanks!

    ishjen wrote:
    Anybody know how fast the read/write speeds are with a USB hard drive and the Airport Extreme Base Station?
    Welcome to Apple's discussion groups.
    Apple's AirPort base stations aren't known for fast file serving, but for most purposes they're fast enough.
    I can't comment specifically on your Aperture plan, but with some software it's important to avoid simultaneous access by more than one user.

  • Playing sound from microphone on speakers (through buffer) with constant delay

    Hi,
    I'm looking for an example of code that samples the signal from the microphone and plays it on the speakers. I need a solution that has a reasonably constant delay on different platforms (PC, Android, iPhone). A delay of around 1-2 s is OK for me, and I don't mind if it varies every time the application starts.
    I tried using the SampleDataEvent.SAMPLE_DATA event on the Sound and Microphone classes.
    One event would put data into the buffer, the other would read data from it.
    But it seems impossible to maintain a constant delay: either the delay grows constantly, or it shrinks to the point where I have fewer than 2048 samples to output and the Sound class stops generating SampleDataEvent.SAMPLE_DATA events.
    I want to process each incoming frame, so using setLoopBack(true) is not an option.
    PS: This is more of a problem on Android devices than on PC, although when I start to resize the application window on PC the delay starts to grow as well.
    Please help.

    Most recording applications have a passthrough option so you can listen as you record. It is off by default to avoid feedback if you are using a microphone.
