The mysterious case of the disappearing cache synchs

Hi. I'm using ActiveMQ 4 with TopLink 10g for cache synching, and lately I've been losing a lot of synchs. Through some debugging, I discovered that the producer is producing the proper change sets, but by the time they reach the consumer, objects go missing. For example, in one transaction four objects are changed and the producer (RemoteCommandManager#propagateCommand) shows four objects in the change set. However, by the time it gets to the consumer (RemoteCommandManager#processCommandFromRemoteConnection), not all of the objects are in the change set, even though the size (or whatever that property in the command object is) still shows 4. Does anybody know what could cause this behavior?
Thanks in advance.

If you are on iOS 8.2 then update to iOS 8.3.
If that doesn't solve the problem try the standard troubleshooting steps in this order:
Restart: Press the On/Off button until the Slide to Power Off slider appears, select Slide to Power Off and, after it shuts down, press the On/Off button until the Apple logo appears.
Reset: Press the Home and On/Off buttons at the same time and hold them until the Apple logo appears (about 10-15 seconds). No data will be lost.
Restore: Connect your device to iTunes on your computer, backup, and then select Restore to Factory.
See here for more details on restore: https://support.apple.com/en-us/HT201252

Similar Messages

  • The Case of the Disappearing Documents

    The first time this happened to me, I figured I was just
    imagining things. Now that it's happened to me a second time, I
    am certain that my imagination has nothing to do with it.
    I am working in Captivate 2 in an XP (SP2) environment. I've
    completed about a dozen projects successfully using Captivate 1 and
    am in the midst of creating my first project in version 2.
    I've created a Captivate 2 document with less than a dozen
    slides. I experience the disappearing menus problem quite
    frequently -- perhaps every half hour or so while using the program
    (I haven't tried the clicking on the library item trick yet, but
    will the next time it happens). Typically I click on the Save
    button and quit out of the program whenever it happens. One time
    when I did that, the CP file went missing from the folder it was
    saved in. Completely missing.
    The file icon itself was not in the Folder. The file name was
    no longer listed on the launch window of the program, nor was it
    listed in the File menu as a recently opened file. I know I am not
    crazy and figured the file had to be somewhere on my hard drive, so
    I started searching for files of a certain size which were updated
    in the last day. I actually located the file, which had been
    renamed with a .tmp extension (and gobbledygook before the
    extension). I only recognized the file because of its size and when
    I hovered my mouse pointer over it, the document's name from the
    Properties dialog box showed up in a tool tip. I was able to open
    the file, rename it, and resume using it.
    I figured it was just a fluke...but now it's happened a
    second time, albeit slightly different circumstances. The program
    froze and BAM! the file completely disappeared. Again, I searched
    my hard drive and was able to locate the file.
    So now, I save a copy of my file about every half-hour in a
    separate folder...because I cannot afford to lose more than a
    half-hour's work. I am very worried about the stability of this
    product. Does anybody have any ideas regarding this issue?

    I don't know if version 3 will be any better, but I have my
    doubts. I am totally embarrassed about my failure to be able to
    complete this project on time for my client due to all of these
    issues. As a freelancer, the ability to make my clients' deadlines
    and provide a high-quality end product is crucial to my continued
    success. I am furious with Adobe for this product's instability.
    quote:
    Originally posted by:
    paddie_ooo
    Hi geekygURL:
    Several folks on our team have had the same issue. The files
    simply will not open... or the file appears but shows as 0 slides
    and 0kb in size. We're also working from c:// ... with 400G+ of
    space... and using std naming convention for files (no special
    characters etc).
    Unfortunately, missed deadlines and extra non-billable hours
    will result in staffing issues and different purchasing decisions
    in the next fiscal year. Some folks are saying ... why did Adobe
    drop the "R" when they released Captivate2.
    Is Captivate3 really going to be any better? If so... how
    about a free upgrade for all of us who suffered through
    Captivate2? It's really tough to justify the expense of Captivate3
    to our bosses when all they have seen are the problems with
    Captivate2.

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database for maintaining the analyzed data set on disk and producing reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
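    To make the single-file layout concrete, here is a minimal sketch (not SSW source) using the BDB C++ API of how one physical file can hold a primary database and a secondary index kept in sync via Db::associate. The database names echo the post; the file name "webalizer.db" and the key-extractor body are assumptions, since the post does not show SSW's record layout.
    // Minimal sketch: one physical file holding the "urls" primary database
    // and the "urls.hits" secondary index described above.
    #include <db_cxx.h>

    // Secondary key extractor for urls.hits. SSW's real record layout is not
    // shown in the post, so this simply assumes the hit count is stored in
    // the first 4 bytes of the primary record's data.
    static int hits_of(Db * /*sdb*/, const Dbt * /*key*/, const Dbt *data, Dbt *result)
    {
        result->set_data(data->get_data());
        result->set_size(sizeof(u_int32_t));
        return 0;
    }

    int main()
    {
        DbEnv env(0);
        env.open("./dbhome", DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE, 0);

        Db urls(&env, 0);       // urls (p)
        Db urls_hits(&env, 0);  // urls.hits (s)
        urls_hits.set_flags(DB_DUPSORT);   // many URLs may share a hit count

        // Both logical databases live in the same physical file, so the whole
        // month's data can be renamed or backed up as a single unit.
        urls.open(NULL, "webalizer.db", "urls", DB_BTREE, DB_CREATE, 0);
        urls_hits.open(NULL, "webalizer.db", "urls.hits", DB_BTREE, DB_CREATE, 0);

        // Keep the secondary index in sync with the primary automatically.
        urls.associate(NULL, &urls_hits, hits_of, 0);

        urls_hits.close(0);
        urls.close(0);
        env.close(0);
        return 0;
    }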
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from source on a Win2K3 server (release build). The test machine is a 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like the one described above at about 20K records/sec.
    BDB is configured as a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And all of that is after most of the analysis has been done, just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed all but stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well; eventually SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated itself.
    * The Db.put method, which was called 73557 times while profiling the save of the database at the end, took 281 seconds. Interestingly enough, this method called the Win32 ReadFile function 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up the records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed by 6 to 8 times with these two techniques:
    1. A separate trickle thread was created that periodically calls DbEnv::memp_trickle (see the sketch below). This works especially well on multi-core machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I have these queries:
    1. What percentage of clean pages did you specify?
    2. At what interval were you calling memp_trickle from this thread?
    This would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.
    >
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use these secondary databases are generated. This improved speed from 4K rec/sec to 14K rec/sec.
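    To make technique 1 above concrete, here is a minimal sketch of such a trickle thread using the BDB C++ API. The 20% clean-page target and the one-second interval are illustrative assumptions, not the values used in SSW (which is exactly what the reply above is asking about).
    // Sketch only: a background thread that keeps part of the BDB cache clean
    // by periodically calling DbEnv::memp_trickle.
    #include <db_cxx.h>
    #include <atomic>
    #include <chrono>
    #include <thread>

    static std::atomic<bool> keep_trickling(true);

    static void trickle_loop(DbEnv *env)
    {
        while (keep_trickling.load()) {
            int nwrote = 0;
            // Ask the memory pool to keep at least 20% of its pages clean;
            // nwrote reports how many dirty pages were written this pass.
            env->memp_trickle(20, &nwrote);
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }
    }

    int main()
    {
        DbEnv env(0);
        env.open("./dbhome",
                 DB_CREATE | DB_INIT_MPOOL | DB_PRIVATE | DB_THREAD, 0);

        std::thread trickler(trickle_loop, &env);

        // ... run the normal log-processing / Db::put workload here ...

        keep_trickling = false;   // stop the trickle thread before closing the env
        trickler.join();
        env.close(0);
        return 0;
    }
    Technique 2 amounts to opening the secondary Db handles only at report time and calling Db::associate with the DB_CREATE flag then, so the secondary indexes are built in one bulk pass instead of being maintained on every put.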

  • An error occurred in the core cache

    Hi All
    I have posted this message almost 3 to 4 times, but no one has replied, as they have never come across the problem. If this is the case, we should take up those cases first. Please help!
    Case : ISA B2C CRM
    Deploying through SDM
    I have created New Custom B2C as per Extension Template
    Action
    Business Object Manager
    Business Object
    Backend Interface
    Backend Object
    Created Entries in bom-config.xml & backendobject-config.xml as well
    Classes are as per Extension.
    My modified application was working fine ==> it passed all layers, retrieving data from CRM and displaying it on the custom JSP.
    Now, the interesting point I am facing is that only the EAR which was deployed successfully and running before the server restart keeps working.
    When I create a new EAR (with the same context node) and deploy it, this error comes up after launching the application (yes, I have undeployed the working EAR, since the context node is the same):
    500 Internal Server Error
    SAP J2EE Engine/6.40
    Application error occurred during request processing.
    Details: Error [com.sap.engine.services.servlets_jsp.server.exceptions.WebServletException:
    Error while servlet [action] is initialized with a run-as identity.],
    with root cause [java.security.PrivilegedActionException: null].
    Exception id: [0002A56BEBF7005700000112000007A400041641E2ED396E]
    Now it is the same case with each and every EAR with a different context node that I create through the ISA build tool (but earlier this was not the case, though I am following the same procedure for creating the EAR).
    What is going wrong?
    Is there any problem with SDM?
    When I went to the logs, isaerror.log shows:
    #1.5#0002A56BEBF7003E00000039000007A400041642BB7830DC#1150375776023#bccom.sapmarkets.isa.core.cache.CacheInitHandler#sap.com/crm.b2c_b2cApp#bccom.sapmarkets.isa.core.cache.CacheInitHandler#J2EE_ADMIN#542##ilggladev04_C4D_4756250#Guest#65318570fc6d11dac2c90002a56bebf7#SAPEngine_Application_Thread[impl:3]_19##0#0#Error#1#/#Plain###[undefined|system.cache.exception] An error occurred in the core cache "region 'XCM_SESSION_SCOPE' already exists in cache"# #1.5#0002A56BEBF7003E0000003A000007A400041642BB7A1624#1150375776132#bccom.sapmarkets.isa.core.cache.CacheInitHandler#sap.com/crm.b2c_b2cApp#bccom.sapmarkets.isa.core.cache.CacheInitHandler#J2EE_ADMIN#542##ilggladev04_C4D_4756250#Guest#65318570fc6d11dac2c90002a56bebf7#SAPEngine_Application_Thread[impl:3]_19##0#0#Error#1#/#Plain###[undefined|system.cache.exception] An error occurred in the core cache "region 'XCM_APP_SCOPE' already exists in cache"# #1.5#0002A56BEBF7003E0000003B000007A400041642BE2515F7#1150375820912#bccom.sapmarkets.isa.core.init.InitializationHanlder.performInitialization#sap.com/crm.b2c_b2cApp#bccom.sapmarkets.isa.core.init.InitializationHanlder.performInitialization#J2EE_ADMIN#542##ilggladev04_C4D_4756250#Guest#65318570fc6d11dac2c90002a56bebf7#SAPEngine_Application_Thread[impl:3]_19##0#0#Error#1#/#Plain###[undefined|system.initFailed] Initalization of com.sapmarkets.isa.core.xcm.init.ExtendedConfigInitHandler failed
    java.lang.StackOverflowError #
    Then I came to the point that there is something wrong with the cache control configuration,
    with errors as follows
    An error occurred in the core cache "region 'XCM_SESSION_SCOPE' already exists in cache"#
    An error occurred in the core cache "region 'XCM_APP_SCOPE' already exists in cache"#
    Initalization of com.sapmarkets.isa.core.xcm.init.ExtendedConfigInitHandler failed
    I tried to clear the cache through http://localhost:50000/b2c_b2cApp/admin/index.jsp
    but the page comes up blank, without any links.
    Initially this was not the problem, so how is it coming up now? As far as I know, it started once our development server (not just the J2EE server) was restarted.
    Now, after that, whatever new EAR I build (locally) is not coming up properly.
    Please can you throw some light on that?
    Please help me; I believe you all can solve my problem.
    Please reply soon.
    Thanks & Regards
    Ravi Sah

    Hi,
    can somebody look at this issue?
    I will be very thankful to you all.
    Thanks & regards
    Ravi Sah

  • Whenever I try to open an app on my Mac it just bounces on my dock and then settles. At first it bounced then disappeared, and then I had to reinstall it, but now it just bounces. Any help?

    Whenever I try to open an app on my Mac it just bounces on my dock and then settles. At first it bounced then disappeared, and then I had to reinstall it, but now it just bounces. Any help?

    Try one or more of the following:
    > Restart Your Computer
    > Uninstall and Reinstall iTunes
    > Reset iPod (nothing will get deleted)
    > Look in My Computer (Windows 98, XP) OR Computer (Windows Vista or 7) and see if your iPod is in there.
    In some cases (and this has happened to me before), plug your iPod into and out of your computer a few times. That should get it going. If not, then I strongly recommend restarting your computer.

  • I upgraded my iPhone last night but at the end of the upgrade the Contacts were synched. This has replaced my iPhone contacts with my Windows contacts. Is there a way to restore my iPhone contacts?

    I upgraded my iPhone last night. At the end of the upgrade iTunes synched the Contacts, but all my iPhone contacts have been deleted and replaced by the Windows contacts on my computer. Can the iPhone contacts be restored?  The latest backup shows the time to be after the sync so we tried to restore the iPhone but only the Windows contact list is there. Can you access any previous backup files anywhere?

    Hello ymprice91277, let's solve this mystery! To unlink the Facebook contacts go to Settings, Facebook, turn off contacts.
    WiltonA_VZW
    VZW Support
    Follow us on twitter @VZWSupport

  • Dialog box says "Bridge encountered a problem and is unable to read the cache. ... purge central cache". For one, I can't find the central cache, and two, I've purged the cache in Bridge. Shouldn't that help?

    Our Bridge has been acting very strange. It keeps giving a dialog box of "Bridge encountered a problem and is unable to read the cache. ... purge central cache". For one, I can't find the central cache, and two, I've purged the cache in Bridge. Shouldn't that help? There's just all kinds of weird stuff going on. I suppose it does have something to do with the central cache, so maybe someone can tell me where to find it.
    When I use the burn/dodge tool sometimes it drags and staggers and takes forever to complete whatever I'm burning/dodging. When I try to delete an image from the dock, it won't disappear but it won't display either.
    Any help would be appreciated.

    The Central Cache is the Bridge Cache.
    It's referred to as the Central Cache to differentiate it from an individual folder's cache or even an individual image's cache.

  • How can I delete unwanted/inaccurate email addresses from the iPhone cache?

    How can I delete old / unwanted / inaccurate email addresses from the "suggestions cache" that keep popping up the moment I type the first letter in a fresh email address box?
    This is driving me mad! The London Store Apple Genius can only suggest either deleting my email account and then recreating it, or totally resetting the iPhone to the original factory settings and then resynching it to my MacBook Pro to get everything back onto the iPhone. However, he's not sure that this cache isn't stored somewhere on the MacBook Pro during synching, so I may well end up where I started! Allan Sampson answered my last "Help" topic, so I'm hoping he can beat the Apple Store Genius again!
    Over to the Forum and Allan.

    Hi Kelvin and everyone with the same problem.
    This used to be me and the painful solution ( for PC users ) is:
    Buy a Mac!
    Everything "talks" to each other and it all interfaces fine with the business world.
    Once you've adjusted to the ( much better ) Mac "culture" it all becomes clear and the
    backup service and advice is second to none. Not only that, everything works!
    Hope this helps......

  • Forcing the disk cache to be written to disk

    Hi all. We are looking for a way to ensure the content of the icommon is on disk in the ufs, as we need to read it. However, sync is asynchronous and does not seem to provide what we're looking for. When a file is updated or created, all the information is not immediately written to disk. It is kept in unified memory and later flushed to the disk cache, which then writes it to the media. However, since we're reading the hot FS, we need to force ALL file metadata AND data to be written to the media.
    It seems that when I run the command 'ff', probably as a side effect, this is happening. ff takes too long and works at the FS level. Is it a side effect, or does 'ff' call a specific function (truss did not reveal anything useful)?
    Thanks all and best regards.

    I. What is really happening under the hood?
    1. sync(2) passes execution into kernel mode.
    2. The kernel function syssync() is called,
    3. then vfs_sync(0),
    4. and eventually ufs_sync(vfsp, ...) is called with NULL as the first argument.
    With NULL as the first argument, ufs_sync() just schedules, but does not necessarily complete, the writing of ufs metadata before returning.
    So this behavior completely matches the one described in the man page for sync(2).
    II. directio does not solve your problem because it affects only the way the file data (not metadata) goes to disk (read directio(3C) and mount_ufs(1M) carefully).
    III. What you really need
    It may seem like a heavy weapon to you, but one of the possibilities is to write a loadable system call (a loadable kernel module) that will
    invoke ufs_sync() with proper arguments (a non-NULL vfsp) for the mounted file
    system of interest.
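    For contrast with the asynchronous sync(2) behaviour described in point I, here is a small illustrative sketch (not from the thread, and a hypothetical helper name) of fsync(2), which is synchronous but only per file: it does not return until that one file's data and metadata are on stable storage, and it does not flush the rest of the file system's metadata, which is what the original poster actually needs.
    // Hypothetical helper: write a buffer and force this one file's data and
    // metadata to the media. Unlike sync(2), fsync(2) blocks until the
    // write-back completes, but its guarantee covers only the file behind fd.
    #include <cstddef>
    #include <fcntl.h>
    #include <unistd.h>

    int write_durably(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        return close(fd);
    }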

  • Lifetime of Object in the Object Cache on selects

    On inserting an object (corresponding to user-defined SQL type), I can use a stack-allocated struct for the object and its indicator structure, and pass these addresses to OCIBindObject.
    But on selecting an object (by value, not ref), the doc says one must let the OCI runtime allocate the object struct and its indicator struct in the Object Cache, and call OCIObjectFree when done with the object. But it's unclear whether one needs to free the object for each row of the result set processed, or only once all rows have been processed.
    By looking at the cache-allocated addresses of the object/indicator structs, it looks like the same "instance" in the cache is re-used for every row (the addresses remain the same for every row). This tends to indicate the same object is reused, as opposed to a new one being allocated. I added a single OCIObjectFree after processing all rows, and things look OK (read: no crash, no complaint/error from OCI).
    But I feel a little uneasy about this because, unlike in my case, when the object has secondary allocations associated with it (because the type contains strings, collections, or other object members), if I do need to OCIObjectFree for each row and I don't, then I'm leaking all the secondary storage from previous instances of the object.
    Can someone please shed some light on the subject? I find the OCI doc often a little sketchy on these very important lifetime issues. I'd appreciate some advice from someone experienced with the Object-Relational aspects of OCI.
    Thanks, --DD

    when I have to select objects by value:
    * I let oci allocate objects in the object fetch array
    I don't think we have a choice here, in fact. At least that's the way I interpret the doc.
    * I have different addresses for every entry in the array after the fetch
    But even an array fetch is done in a loop. You fetch 10 rows at a time, and you get say 25, so you have 3 iterations / fetches.
    Of course the 10 object entries of a given fetch are different, but is the address of the 6th object the same in the first, second, and third fetch?
    * I call OCIObjectFree() for every entry of the array. If I have retrieved object-based attributes from the object, I call OCIObjectFree() on the object attributes recursively.
    To make it easier to understand my question, please find below the pseudo-code of what I'm doing (all error checking and most boilerplate arguments OCI requires are omitted, to concentrate on the control flow and the calls made):
    create type point_typ as object (
      x binary_float, y binary_float, z binary_float
    );
    create table point_typ_tab (
      id number, point point_typ
    );
    struct point_typ { // C-struct matching SQL User Type
        float x, y, z;
    };
    struct point_ind { // indicator structure for point_typ
        OCIInd self, x, y, z;
    };
    static void select_objects() {
        Environment env(OCI_OBJECT);
        env.connect(zusername, zpassword, zdatabase);
        OCIHandleAlloc(OCI_HTYPE_STMT);
        OCIStmtPrepare("SELECT id, point FROM point_typ_tab");
        ub4 id = 0; OCIDefine* define_id = 0;
        OCIDefineByPos(&define_id, 1, &id, sizeof(id), SQLT_UIN);
        OCIDefine* define_pt = 0;
        OCIDefineByPos(&define_pt, 2, 0, sizeof(point_typ), SQLT_NTY);
        OCIType* point_typ_tdo = 0;
        OCITypeByName("POINT_TYP", OCI_DURATION_SESSION,
            OCI_TYPEGET_ALL, &point_typ_tdo);
        point_typ* p_pt = 0;     ub4 pt_size = 0;
        point_ind* p_pt_ind = 0; ub4 pt_ind_size = (ub4)sizeof(point_ind);
        OCIDefineObject(define_pt, point_typ_tdo,
            (void**)&p_pt, &pt_size, (void**)&p_pt_ind, &pt_ind_size);
        sword rc = OCIStmtExecute(0/*row*/);
        while (OCIStmtFetch2((ub4)1/*row*/, OCI_FETCH_NEXT) has data) {
            if (p_pt_ind->self == OCI_IND_NOTNULL) {
                // Use p_pt. Value of p_pt is the same for every iteration.
            }
        }
        OCIObjectFree(p_pt);
        OCIHandleFree(OCI_HTYPE_STMT);
    }
    As you can see, I do a single OCIObjectFree after the loop and, as mentioned previously, p_pt's value (a point_typ address) remains identical during the iteration, but the values in that memory chunk do change to reflect the various point instances (so each point instance overlays the previous one).
    Are you saying I should be calling OCIObjectFree(p_pt); inside the loop itself?
    And if so, should I call OCIObjectFree(p_pt); even if the indicator tells me the point is atomically null?
    That's what I do in OCILIB, and it doesn't seem to leak.
    Reading your post, I'm unsure whether you free the points in the array after the loop or inside the loop. I hope the pseudo-code I posted above helps to clarify my question.
    BTW, OCCI is just a wrapper around OCI. Because OCCI's
    first releases were buggy (it's getting better) and
    just a few compilers (and compiler versions) are
    supported, OCI remains a portable and workable
    solution for dealing with Oracle Objects.
    As I wrote in a separate reply in this thread, I also agree OCI is the way to go, for various reasons.
    Thanks for the post. I'd appreciate it if you could confirm whether or not what I'm doing is consistent with what you are doing in OCILIB, now that I've hopefully made my question clearer.
    Thanks, --DD
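    As an addendum: if it turns out that each fetched instance does have to be released (which is what the OCILIB reply appears to describe), the fetch loop in the pseudo-code above would change roughly as sketched below. This keeps the same abbreviated pseudo-code style with full handle arguments restored only for the calls that matter here; envhp, errhp and stmthp are hypothetical handle names, and whether the per-row OCIObjectFree is actually required is exactly the open question of this thread.
    // Hypothetical per-row variant of the fetch loop (not the confirmed answer).
    while (OCIStmtFetch2(stmthp, errhp, 1, OCI_FETCH_NEXT, 0, OCI_DEFAULT) == OCI_SUCCESS) {
        if (p_pt_ind->self == OCI_IND_NOTNULL) {
            // use p_pt for this row
        }
        if (p_pt != NULL) {
            // Release this instance (and any secondary allocations) before the
            // next fetch; the pointers are reset so that OCI allocates a fresh
            // instance on the next OCIStmtFetch2 (an assumption of this sketch).
            OCIObjectFree(envhp, errhp, p_pt, OCI_OBJECTFREE_FORCE);
            p_pt = NULL;
            p_pt_ind = NULL;
        }
    }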

  • Injecting a bug in the working of the L2 cache of Opensparc T1

    As a part of my research work, I need to inject a bug in the operation of the L2 cache of Opensparc and then do some analysis of the RTL code based on the outcome of that bug. I have been working on Opensparc T1. I decided to target the writeback operation of the L2 cache.
    I did the following to achieve my end:
    1) I have written a SPARC assembly language program that accesses the L2 cache again and again. Specifically, I change the page size to 4 MB and do 16 write accesses (sth operations) to addresses 262144 bytes apart (64 * 4 * 1024) to hit set 0 of bank 0 of the L2 cache, then again 16 such accesses to set 1 and 16 such accesses to set 2. This ensures that after 12 accesses there are 4 writebacks on average, for the last 4 accesses to each set. I then also do a load access (lduh) on one of the addresses that were written back previously. I check in my program whether I am reading the correct value that I had written to that address.
    2) Normally, what happens is that when my LOAD PCX packet arrives, it undergoes a miss in the L2 cache, but it hits in the writeback buffer (that is, the address that I am trying to load is in the writeback buffer at that time). So this LOAD waits in the miss buffer. When the writeback buffer receives the DRAM write ack corresponding to this address, it wakes the corresponding miss buffer entry, and then this load executes through the L2 cache pipeline to get the correct value, and my assembly program execution succeeds.
    3) Right now, I inject a simple bug to make the wbctl_hit_unqual_c2 signal in file "sctag_wbctl.v" stuck at 0, that is the signal that goes from writeback buffer to miss buffer to tell miss buffer that this particular access is a hit in the writeback buffer. Since this signal is stuck-at-0, what I expect is that the miss buffer will insert this particular LOAD as a true miss (one that doesn't depend on any miss buffer/fill buffer/writeback buffer value) and so the miss will be issued to L2 cache pipeline independently and will receive the old value of this address from the DRAM. So my assembly program will fail. That is my expectation.
    4) What actually happens in this case, is that the miss buffer treats the LOAD as a true miss and does issue the READ independently to the DRAM, but the read request goes to the DRAM just after the write request to DRAM for the same address goes to the DRAM from the writeback buffer. As a result, the manifestation I see is that my assembly program terminates with the following error:
    "ERROR : In dram channel 0
    At time 13116747 rd entry 0 which is address = 800086000, has a match with incoming write entry at WR Q location 4
    13116747 ERROR: DRAM Channel 0 RD/WR Sequencing violation
    ERROR: cmp_top.cmp_dram.cmp_dram_mon.dram_mon0: DRAM monitor exited"
    I do not see the RAW hazard error that I was expecting (I was expecting a clean exit of my program with a fail, that is, inside the program the value read would be compared with the value expected by the CMP and that would result in a fail); instead I see the above from the DRAM monitor code. Is this what I should be seeing? Is this read/write sequencing error equivalent to the RAW hazard that I am trying to create?
    5) I tried to delay the write request to DRAM for this address a little, so that my read request will end up reaching the DRAM first and get serviced with the old value, so that my bug manifestation will be as I wanted. I tried assigning a delay to the continuous assignment of signal "can_req_dram" in file "sctag_wbctl.v" so that the write request issued from writeback buffer to DRAM will be delayed till after the read request issued for true miss on that address from miss buffer. But that is not happening. This rd/wr sequencing is all that I can get.
    Could anyone throw some light on this? Maybe the actual RAW hazard is happening in this case, but the program is terminated before giving the expected result because the DRAM monitor is written to catch such sequencing errors and exit early? Also, can anyone suggest a way of delaying the writeback for this particular address so that the write request reaches the DRAM after the read? Please help.

    Hi Avi,
    Because you didn't grant access permission to other users on the NESTED_TABLE_TYPE_NAME (maybe?)
    Of course, I remembered to grant the EXECUTE privilege on the nested table type to other users. The reason is not synonyms either, because Oracle unfortunately doesn't allow these types to be accessed through synonyms.
    And the applications can be tons ;-), I use it to implement regular expression functionality in databases prior to 9i.
    Regards,
    Roman

  • Putting a object on the coherence cache

    Hi All,
    Is there a better way of doing the following:
    In a multi-threaded region of code I perform the following sequence of steps:
    1. Use a filter to check if object foo already exists on the cache.
    2. If the result set of the filter is empty, I take a lock: cache.lock(foo.getUniqueID());
    3. I put the object foo into the cache: cache.put(foo.getUniqueID(), foo);
    Basically I am trying to prevent another thread from overwriting the existing cache object.
    Is there a better way to achieve this ?
    regards,
    Ankit

    Hi Ankit,
    You can use a ConditionalPut EntryProcessor http://docs.oracle.com/cd/E24290_01/coh.371/e22843/toc.htm
    Filter filter = new NotFilter(PresentFilter.INSTANCE);
    cache.invoke(foo.getUniqueID(), new ConditionalPut(filter, foo));
    An EntryProcessor takes out an implicit lock on the key, so no other EntryProcessor can execute on the same key at the same time. The ConditionalPut applies the specified Filter to the entry (in this case, checking that it is not present) and, if the filter evaluates to true, sets the specified value.
    JK

  • Forms/Reports: Role of the Database cache and Web cache

    Hello oracle experts,
    I am running a purely Forms and Reports based environment (9iAS).
    My questions are:
    a. Is it possible to use features from the Web Cache and
    Database Cache to boost the performance of my applications?
    b. Are all components monitorable from the OEM?
    Please guide me so that I can configure my OEM to monitor my
    Forms and Reports services.
    thanks in advance for your reply
    Kind regards
    Yogeeraj

    Hi BradW,
    The way this is supposed to be done in Web Cache is by keeping separate copies of a cached page for different types of browsers distinguished by User-Agent header.
    In the case of a cache miss, Web Cache expects the origin servers to return the appropriate version of the page based on browser type, and the page from the origin server is simply forwarded back to the browser.
    Here, if the page is cacheable, Web Cache retains a separate copy for each User-Agent header value.
    And when there is a hit on this cached page, Web Cache returns the version of the page whose User-Agent header matches the request.
    Check out the config screen titled "Header Association" for this feature.
    As for forwarding requests to different origin servers based on the User-Agent header value, Web Cache does not have such a capability.

  • Unused tables in the buffer cache

    I have a program that queries the V$BH view every hour and stores the name of every table in the buffer cache. There is a set of tables that are never used which keep appearing in the buffer cache every hour. I did a case-insensitive search of the V$SQL and V$SQLTEXT views for these table names, but found nothing. How can tables be put in the buffer cache if there is no SQL statement referring to them? I'd like to find the session and program which is using these tables that no one is supposed to be using.
    Kevin Tyson

    This can be due to recursive SQL, meaning SQL that is not fired directly by the application but fired by Oracle to satisfy some internal requirement. It can involve system tables or other users' tables.
    Example of system tables:
    Oracle uses system tables to reflect the current state of the database. For example, when you insert records it updates the extent information, and when you create an object it updates the data dictionary so that the object is reflected there, and so on.
    Example of users' tables:
    You fire an insert statement to insert one record into table emp, but to insert that record Oracle has to check the state of the foreign key data by querying dept and other tables. So even if you didn't reference the dept table, Oracle uses it internally to check the integrity constraint.
    Daljit Singh

  • Since I updated my Creative Cloud desktop App to its last version, my files are no longer synchronized. I received the "fail to synch files" and "server error" messages.

    Since I updated my Creative Cloud desktop App to its last version, my files are no longer synchronized. I received the "fail to synch files" and "server error" messages.

    Hi, Jeff.
    I'm not on a network and I didn't change anything in my security setup, so I got in touch with customer support. They checked my computer and found nothing wrong, so they uploaded some log files to analyse the case. I'm waiting for an answer.
    Thanks for the tips.

Maybe you are looking for

  • USB External Drive Backup & Erase Woes

    I have been using a Lacie 500GB USB external drive allocated as a backup disk for time machine. Over the last couple of weeks, the disk has been spontaneously ejecting itself from the desktop. This could happen when not backing up to time machine, or

  • Pls help me to UPDATE my iPod touch.

    hi, I wanted to update my iPod touch (first generation) with the 3.1.2 software update. I bought the update... but when I want to install it, it says I have an internet interruption and to try again later. I have tried like 10 times now but all just the sam

  • It's very urgent

    DECLARE e_invalid_department EXCEPTION; cursor c1 is select sal,deptno from emp FOR UPDATE OF sal NOWAIT; BEGIN for emp_rec in c1 loop UPDATE emp SET sal = sal + (sal * DECODE(mgr,NULL,100, DECODE(emp_rec.deptno,10,10, 20,10, 30,15, 40,15, 20)) / 100

  • Split only ONE footnote?

    Hi, I have a long document with a bunch of footnotes. All the footnotes are fairly short except for one which I would like to split to the next page. Is there any way to split just ONE footnote? I have tried the "split footnotes" option but it splits

  • Third party mouse on Intel Macs

    Hello, does a third party mouse driver (Microsoft or Logitech) written for PowerPC processor work correctly on Intel-based Macs?