Checkpoint perf improvement in 4.4 vs 3.2

Hi,
Can someone tell me whether any perf improvements have been made to checkpoint processing between the 3.2 and 4.4 releases?
If yes, I'm interested in getting some details/pointers on what has been done.
If not, are there any plans for perf improvements?
Regards,
Jerome

Hello,
I checked into this further and some additional information
follows. If there is something in particular that you are
interested in, let me know.
In general there have been a variety of checkpoint improvements
over the years, mostly concentrating on keeping checkpoints from
impacting normal work processing and on avoiding writing pages
that don't need to be written to implement transactional guarantees.
The buffer manager was significantly rewritten in 4.1, which
included how checkpoints work. The specific change there was to have
the checkpoint code always complete. In previous releases the
checkpoint could return DB_INCOMPLETE if the checkpoint thread
could not get control of a buffer due to high traffic on that
buffer by other threads.
As of 4.1 the buffer manager uses multiple mutexes rather than a
single mutex, so other threads retain better access to the buffer
pool during a checkpoint than they did previously.
SR 11031 addressed a problem where a mutex was held in the buffer
manager while the fsync system call was issued, locking out some
types of accesses to the buffer pool.
The next release will have additional changes to checkpointing. There
are improvements in the way dirty buffers are located during a
checkpoint, which can yield noticeable gains with large buffer
pools that hold a majority of clean buffers.
thanks,
Sandra
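
For reference, applications typically drive checkpoints from a dedicated thread. A minimal sketch using the Berkeley DB Java binding; the environment directory, open flags and the 64 KB log threshold are illustrative assumptions, and the C equivalent is DB_ENV->txn_checkpoint():

    import java.io.File;
    import com.sleepycat.db.CheckpointConfig;
    import com.sleepycat.db.Environment;
    import com.sleepycat.db.EnvironmentConfig;

    public class CheckpointDemo {
        public static void main(String[] args) throws Exception {
            // Open a transactional environment (directory is an example).
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            envConfig.setInitializeCache(true);
            envConfig.setInitializeLocking(true);
            envConfig.setInitializeLogging(true);
            Environment env = new Environment(new File("/tmp/dbenv"), envConfig);

            // Checkpoint only if at least 64 KB of log has been written since
            // the last checkpoint; since 4.1 this call runs to completion
            // instead of returning DB_INCOMPLETE under buffer contention.
            CheckpointConfig cpConfig = new CheckpointConfig();
            cpConfig.setKBytes(64);
            env.checkpoint(cpConfig);

            env.close();
        }
    }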

Similar Messages

  • Comparison of 11g "10x" perf improvement with PL/SQL

    Could not find any refs. The only thing was some older Ask Tom articles stating that it would only be suitable for corner cases where PL/SQL got ugly, due to massive perf differences. What is the difference in 11g - is that still the case?

    Hi Peter:
    The Oracle 10g JVM using NCOMPed libraries has performance similar to 11g JITed classes.
    My experience with [Lucene Domain Index|http://docs.google.com/View?id=ddgw7sjp_54fgj9kg], which is certified against 10g/11g, shows similar performance results when indexing big tables.
    PL/SQL is very fast at the jobs it can do ;) but when you are trying to work with multimedia types, free-text search, XML, REST and other environments, you can't get better performance than Java - or, worse than that, you can't do the job in PL/SQL at all.
    For example, you can implement a REST web service using PL/SQL which returns a JSON object serialization, but if you want to implement pluggable REST representations, routing, effective logging and the many other services included by default in all REST libraries, it will be a nightmare.
    Best regards, Marcelo.

  • Leopard Performance vs. Tiger perf.

    Hi - I'm running 7.2 on a G5 quad core, currently with Tiger. Does anyone have any info about possible improvements in stability and speed when running 7.2 on the latest version of Leopard?
    I know some folks running L8 on Leo saw performance improvements in the later 10.5 releases...
    thanks,

    To be clear, if you're on 7.2.3, & 10.4.11, & are considering upgrading to 10.5.6, you shouldn't have any problems & might even notice some slight improvements.
    Having said that, there's always someone who has problems with an upgrade; but generally speaking, I don't think you'll find changes to performance an issue. I think that's what you wanted to know…

  • Premiere Pro CS6: GTX 660 ti ($300)  vs. GTS 450 ($100) and other thoughts on upgrading HW

    I've been wanting to make some major upgrades to my hardware, but it just doesn't seem worth it yet...even after almost 4 years. I ultimately decided to "rent" a new video card and run some tests. Here is some background info on my upgrade thought process and some results comparing the video card performance.
    Disclaimer: I'm not a hardware expert, but I'm not completely clueless (I think). Your input/insight is welcome.
    My system (purchased 2/2009)
    i7 920
    GTS 450 (1GB RAM)
    12 GB 1333 RAM
    Samsung SATA II 128 GB SSD (OS/apps)
    5x 1TB 7200 RPM drives in RAID 0 (with accompanying slower/cheaper 2TB backup drives)
    Some upgrade options I am considering
    Sandy Bridge 3930 - but it's $560 w/o cooling and would require a new, more expensive motherboard, new RAM, cooling, etc.
    Ivy Bridge 3770, but I keep reading that an overclocked 920 isn't that much different in perf (in fairness, mine isn't OC'd). I did find a MB that would work for only $90, so I could make this upgrade for just under $400 (RAM would stay the same).
    Wait for Haswell, but it could be another 9 months and it's supposed to give maybe only a 10% perf gain over IB. It's more focused on mobile - less power, integrated graphics, etc.
    High-end Xeons are totally off the table. Bang/buck is waaaay too low.
    Video card and benchmark reviews/problems
    So I thought I'd first try getting a new video card. I see conflicting benchmarks. This site (the one that provides the CUDA.exe hack) shows very little difference between most GTX cards in perf for their benchmarks. The PPBM5 site shows significant differences between, say, the GTX 680 and lower-end cards. But are these really accurate?
    The GTX 680 is almost $500, so I opted for the 660 Ti at $300 to see if I could get a noticeable perf gain. It seemed like the best bang/buck card and wouldn't require me to get a new power supply.
    Another reason I wanted to do my own tests: None of the benchmarks I've seen actually mention the type of footage used. I care about footage from the Canon MKII-III, or similar footage. I definitely do not care about things like exporting to MPEG 2.
    I did some very unscientific benchmarks, but they were real world for me. First my "problem" areas.
    Performance problem areas
    #1 - Time-lapses consisting of 1080 (height) JPEGs and 2160 (height) JPEGs don't always play smoothly (larger 2160s almost never do). I read adding more VRAM might help. The 660 Ti has 2x the VRAM of my current video card.
    #2 - Split screen sequences (up to 9 clips simultaneously) don't play smoothly.
    #3 - Scenes where I speed up a clip to 1000x don't always play smoothly. (Although upgrading from CS5 to 6 actually seems to have solved this issue, I couldn't get it to repro any longer).
    #4 - Export to h.264 could be faster. I do this a lot, but mostly because it's how I sometimes make proxies because of problems around #1-2 (works fine - used to use CineForm but it always crashed Premiere and these work for my needs). This is typically my final export as well for posting on sites like Vimeo.
    #5 - Timeline rendering could be faster, although I don't do this a lot and if I do it's simple, not a bunch of crazy effects. E.g. use unsharp mask. This is pretty low pri for me though because I think timeline rendering is a bad idea. Once you do it, if you even move the clip you have to render again.
    Some simple bottleneck analysis first:
    Disk queue length sometimes is just over 1 on 1 disk in my RAID array during TL playback. Might slow things down slightly. Not an issue during export.
    Processor never seems to get pegged in any case.
    RAM is never maxed out, but it starts to go to Premiere limits (10 GB that I've set) after playing through several time-lapses (I'm just now noticing this). Choppiness starts well before RAM is even near that on some clips.
    Tests/results:
    NOTE: I do run the 660ti in a PCIe 2 x16 slot. Let me know if you think it would even matter to run in a PCIe 3.0 slot. My MB doesn't have one.
    #1 Time-lapse smoothness - didn't improve with the 660. Moving the 1080 size JPEG TLs to my SSD did help some problem TLs play smoothly however.
    #2 Split screen. Did a test with a 9-clip-at-the-same-time sequence. No improvement with the 660ti.
    #3 Clips speed up 1000x - could not repro the problem now that I run CS 6 vs. 5 on either card.
    #4 - Export to H.264 1080p @23.9x fps.
    Export 5:30 clip of 5D MKIII footage + H.264 proxies:
    GTS 450 - 9:14
    660 Ti - 8:30
    Export 1.5 minute clip of large time-lapses (JPEGs that are 2160 high):
    GTS 450 - 9:35
    660 Ti - 7:00
    Export a 2 minute clip of just MKIII footage
    GTS 450 - 2:45
    660 Ti - 2:45
    #5 Timeline render with simple image correction effect
    Timeline render short 5D MKIII clip with unsharp mask applied:
    GTS 450 - 1:10
    660 Ti - 1:19
    Conclusion:
    The 660 Ti ($300) showed marginal improvements in exporting H.264 against my GTS 450 ($100) and did not address my other issues. Definitely not worth it for the type of work I do.
    Moving my time-lapse JPEGs to an SSD helps play the 1080p versions back smoothly. The 2160p larger versions still lag. Maybe more RAM would help? They still start off choppy and then acquire more and more RAM, so not sure here. Maybe faster 1600 RAM? I don't know, I doubt it. I may have to just use 1080 versions or make proxies.
    I don't see a pegged CPU much if at all, so upgrading to an Ivy Bridge 3770 doesn't seem like it'll help much if at all.
    I did end up buying 2x256 GB SATA III SSDs (only $169 each) that I'll run current projects off of, or at least time-lapse sequences (RAID 0). My motherboard doesn't have any SATA III ports, however, so I won't see the full power of these, but I'm not sure I'll need it. Again, I'm not seeing a clear disk issue from the perf monitoring either.
    I suspect many of these problems are still with the software and how it takes advantage of my hardware, but I'd love more insight.
    Generally I make things work and I don't have any really painful bottlenecks, but I'm always up for perf improvements/doing things faster. It does look like I won't see any major breakthroughs, however, by spending $400-$1000 on HW upgrades.
    Thoughts?
    Luke
    Blog  |  Photography  |  Vimeo

    Thanks for the response, Harm. Replies inline.
    Harm Millaard wrote: SYSTEM: It is an older system, about the same I had in the form of 'Harm's Beast', although I have a much beefier disk setup, more memory and OC'ed to 3.7 GHz, in combination with a GTX 480. Not much you can do about this system, apart from upgrading memory to 24 GB, but the major drawback is that those investments will not carry over to a new system, at least not easily.
    [Luke] From your description of your system it sounds like 4 things could indeed be upgraded and carried over to a new system: 1) OC the processor (e.g. purchase a generic water cooler for ~$100), 2) improve the disk setup, 3) upgrade the video card, 4) add more/faster RAM.
    I've seen in some benchmarks that an OC'd 920 is not so dissimilar to an OC'd 3770K. The latter is faster, but it isn't a huge difference. The larger question still remains - will any/all of these upgrades yield large performance gains and solve all/a higher percentage of my problems? Or do I have a decent sweet spot of a system and should wait for the software (e.g. MPE evolution in CS7-8) to catch up and take better advantage of what I have? Like I said from doing some rudimentary performance monitoring, I'm not seeing a pegged CPU (just a brief spike here/there), I'm not seeing disk transfer at capacity (although 1 disk has a slightly > 1 queue length at times), I'm not seeing in all cases over-utilization of memory, etc. (except higher RAM usage is seen albeit staggered for large JPEG time-lapse sequences, but I see choppiness well before RAM usage gets to 10 GB).
    Harm: VIDEO: You correctly point out that the GTX 680 shows much better results than other cards in the MPE graph on the PPBM5 website. But keep in mind that most 680's are used in new systems, often with the latest CPU's and fast memory. I'm convinced that a 680 is not noticeably faster than a 580, because they have the same memory bandwidth, but it looks that way because they are often accompanied by hexa-core i7-39xx CPU's with large amounts of memory.
    [Luke] Good point - potentially further evidence that the video card doesn't make a big difference? At least not enough to justify 5x the cost (e.g. $500 680 vs. $100 450). This would be consistent with what Studio 1 Productions has seen.
    Harm: The GTS 450 has a memory bandwidth of 86 GB/s, the 660 Ti has 144.2 GB/s, so the latter is significantly faster as you have shown in some of your tests.
    [Luke] The only test I would characterize as close to a significant increase using the 660 would be exporting large JPEG time-lapses to H.264, where it was a good 27% faster. The rest seemed more marginal or did not change.
    Harm: TESTING: You don't mention to what format you exported and with what resolution and frame rate. Hardware MPE will come into play when you have rescaling, frame blending, blurring and stuff like that occurring. If you export to the same frame size and frame rate as your source and no blurring occurs, then exporting is purely a CPU matter and the video card has no impact at all.
    [Luke] See above - H.264, 1080p @23.9x fps.
    Harm: General remarks: I personally consider your 5-disk raid0 setup pretty risky. You have multiplied the risk of losing all data by a factor of 5! You have no redundancy at all. Even though it is fast, I expect your sustained transfer rates are less than 450 MB/s, and when using a 9-clip split screen it may be too slow with the limited memory and the old CPU you have. You have effectively one single volume for video-related editing (apart from the OS disk), and while that makes for easy administration, it still entails the drawbacks of the half-duplex connection of SATA. It might be better to add a couple of HDD's in raid0 for media cache, previews and exports to avoid that limitation. You can always carry those to a new system.
    [Luke] Yes, there is a higher level of risk, but with backups every 30 minutes during project work I get cheap/easy perf gains for the cost of--at most--30 minutes of work. I've lost no work in the last 4 years; 1 drive failed once while I was on vacation and I replaced it easily. Anyway, backup/data integrity is a different issue from performance, which is what I'd like to focus on in this context.
    I get ~420 MB/s read with this array (mostly older WD Blue drives and Deskstars). I'm running out of space, so I just ordered 3 x 2 TB WD Black drives to replace it with, expecting a similar transfer rate. Again, though, I'm not necessarily seeing disk as a bottleneck in perf mon, aside from one disk whose queue length sometimes goes over 1, so we'll see if the newer Black drives help.
    I have ordered 2x256 GB SATA III SSDs to put my time-lapses on, as having them on my current primary SSD seemed to help in some cases.
    Harm: Sorry to be so harsh.
    [Luke] No worries, harsh is OK, but I'm still not seeing a clear solution to some of my issues, and I'm still not convinced a new system - short of a top-of-the-line SB or Xeon system (both of which are very $$) - will be worth the upgrade. When 5 years had passed between my 2004 system and my current one, the upgrades felt much more significant, especially for bang/$.
    Luke Humphrey
    Blog | Photography | Cinematography

  • Sun VDI 3.0 patch 2 - linux guest display is laggy/unusable

    Hello.
    We have set up Sun VDI 3.0 to test out Ubuntu desktop images. We have it functionally set up, but the lag/redraw makes it unusable. For example, opening, closing and moving windows around has a horrible lag and takes a second or two to redraw. Just doing an "ls" from the command line is noticeably slow. I couldn't figure out any good way to measure the visual slowness I am seeing, so I hope my plain English description makes sense.
    The environment is setup as follows
    Solaris 10 update 7 on x64 hardware.
    Sun VDI 3.0 patch 2 (patch 1 behaved the same)
    virtualbox 2.0.10 (version 2.0.8 with patch 1 behaved the same)
    Sun 7110 as storage device
    Desktop Images tried on the desktop
    RedHat 5.3 (32 bit) and Ubuntu 8.04.3 (32 bit)
    We installed the Sun VDI software in the all-in-one host demo mode. Our host is configured with 24GB of memory (24GB of swap as well, per requirements). The VirtualBox installs were done on the 2.0.8 version of VirtualBox and then imported into the VDI solution. We ran 2 or 3 VMs at a time, so the box still had plenty of RAM left since we only allocated 512MB to each VM (tried upping that to 3GB with no perf improvement).
    The Sun Ray units we tried are a standalone 270 and one 2FS unit. Surely I'm not the only one attempting Linux desktops. I loaded up a Windows XP desktop for testing purposes and it functioned great - barely noticeable performance difference. Any ideas on what I'm missing?
    Thanks. Deet.

    Dirk.
    Thanks for the input. I made the changes you mentioned and got an order of magnitude improvement! I was guilty of being a Solaris admin looking for a Solaris problem:)
    Deet.

  • Optimal XP Configuration for Workshop

    We've put Workshop 8.1 SP1 on WinXP Pro workstations (2.3 GHz, 1 GB RAM, 20 GB HD). We used them for our week of BEA Portal training. The performance was miserable - it took minutes for JSPs to compile and open for debugging - and they locked up or crashed at least twice a day. After class, we upgraded to SP2 (we couldn't during class, to avoid conflict with class examples) but have seen no appreciable difference in stability or performance. BEA provides "minimum" requirements (PIII 700 MHz, 1 GB RAM), which we exceed. Before I essentially go sell my first-born to get our PC guys to get me some newer PC hardware (we buy Dell OptiPlex as our corporate standard - can get 3.2 GHz w/ 2 GB RAM), are there other tweaks which may help? I've received internal suggestions to possibly get the Dell Precision WinXP workstation to get:
    * A bigger, faster disk (70 GB, 7200 RPM SATA) and create a much bigger swap file (15-20 GB).
    * A dual-proc XP box
    But before we sink $3,500 into these, would this help my problem? Or is Workshop just slow and unreliable?

    Mike,
    Have you tried running with SP2 yet? You should notice perf
    improvements with SP2.
    Thomas

  • [Project Server 2010] Maximum Number of Job processor Threads in the queue settings

    Hello,
    I have a farm with SharePoint 2010 and Project Server 2010 installed. The farm contains:
    2 web front end servers
    3 application servers
    Each server has 4 processors
    The SQL databases are installed on our database servers.
    The Microsoft Project Server Queue Service 2010 is started on the 2 web front end servers and on 1 application server.
    We have two instances of PWA installed on this farm.
    I have calculated that the number of available processors we could have on this farm is: 4 processors X 3 application servers (the ones where the project queue is started) = 12. Is that correct? I'm not sure of the definition of the application server that is used in this article.
    If I set the Maximum Number of Job Processor Threads (in the queue settings) to 2 per queue and per instance, we could have 2 job processor threads X 3 application servers X 2 PWA instances = 12 threads operating at the same time - is that correct?
    If yes, do you think this is too high a number of threads?
    Thanks for your help
    Aline

    Hi Aline,
    These settings are for fine-tuning the servers, and are only one of many performance parameters. If you notice that responses from the server are slow with regard to queues, then I would think about amending these upwards, but with a careful eye on processor utilisation. If you up the thread count and the system isn't any more performant, then I would look elsewhere for perf improvements. Before you do anything, of course, baseline the performance of the system.
    Ben Howard [MVP] | web | blog | book | P2O

  • How to create a table with PARTITION BY?

    I have a huge table (1 million records) which affects query performance. I notice it may help to partition it into two: one partition containing heavily used records and another the rest. However, PARTITION BY RANGE seems applicable only to LESS THAN. What I want to do is something like EST_TYP in ('L','N'), etc.
    Can anyone help? Any other advice on perf improvement? What I have done: 1) Reduced the no. of EXT. 2) Created an index.
    Thanks in advance
    Yiguang

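
    For reference, this kind of discrete-value split is exactly what Oracle's list partitioning is for (PARTITION BY LIST, available from 9i; the DEFAULT partition needs 9iR2 or later). A minimal sketch below, issued here through JDBC; the table layout, partition names and connection details are illustrative assumptions built around the EST_TYP values from the post:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.Statement;

        public class CreateListPartitionedTable {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass");
                     Statement stmt = conn.createStatement()) {
                    // Hot rows (EST_TYP in 'L','N') land in one partition,
                    // everything else in the DEFAULT partition.
                    stmt.executeUpdate(
                        "CREATE TABLE est_data (" +
                        "  est_id  NUMBER PRIMARY KEY," +
                        "  est_typ VARCHAR2(1) NOT NULL," +
                        "  payload VARCHAR2(4000)" +
                        ") PARTITION BY LIST (est_typ) (" +
                        "  PARTITION p_hot  VALUES ('L','N')," +
                        "  PARTITION p_rest VALUES (DEFAULT))");
                }
            }
        }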

  • Select to create DynaSet & .AddNew taking a lot of time

    Hi Everybody,
    I'm using OO4O to write a lot of data to the db.
    I'm quite new to this stuff, so I chose to insert new records by calling "CreateDynaset" and then doing an ".AddNew" on the returned dynaset, which I create with a "Select" statement. I now have the feeling that the more records I have, the longer the "Select" takes (which is quite logical), and it has reached approx. 2s per "Select"! My question now is: how can I get around that long time? Is there maybe another way to fill the dynaset to improve my performance?
    This problem appears at approx. 166000 rows; before that amount the app works fast...
    Could there be any perf improvement from decreasing the fetch size??? Well, maybe this is really easy, but I don't have a clue!!!
    HELP!!
    Thanks everybody
    Marco
    Message was edited by:
    Marco
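
    The slowdown is consistent with re-running a SELECT that grows with the table just to append rows. A common workaround is to open the dynaset over an empty result set (e.g. a query with WHERE 1=0) so .AddNew has nothing to fetch first, or to issue direct parameterized INSERTs. OO4O itself is a COM API, so the sketch below only illustrates the direct-insert idea in Java/JDBC; the table, columns and connection details are placeholders:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        public class DirectInsertSketch {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                         "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "pass")) {
                    conn.setAutoCommit(false);
                    try (PreparedStatement ps = conn.prepareStatement(
                             "INSERT INTO big_table (id, payload) VALUES (?, ?)")) {
                        for (int i = 0; i < 166000; i++) {
                            ps.setInt(1, i);
                            ps.setString(2, "row " + i);
                            ps.addBatch();
                            // Flush in chunks so the batch doesn't grow unbounded.
                            if (i % 1000 == 999) ps.executeBatch();
                        }
                        ps.executeBatch();
                    }
                    conn.commit();
                }
            }
        }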

    Hi Jack,
    you have more than 7 lakh (700,000) records in the Z table. It is not good practice to fetch all the records into an internal table and then restrict them with a WHERE clause inside a LOOP statement.
    I hope you have already created a secondary index on the combination of primary key fields.
    Check in ST05 whether your SELECT query fetches records based on the index you created.
    Refer to the link below to verify that your program is using the index you created.
    Re: Indexing
    Regards,
    Peranandam
    Edited by: peranandam chinnathambi on Apr 7, 2009 8:38 AM

  • Nvidia GTX 560 Ti Twin Frozr driver hanging.. even after fresh OS install

    Dear,
    Sorry for my English.
    I'm facing an issue with my Nvidia GTX 560 Ti Twin Frozr II, and I've seen others have it too...
    My config :
    When using Windows or playing games (I'm a Crysis addict) I get a screen freeze, then a blank (black) screen, and after a few seconds a video driver message saying that it recovered the display because it had stopped responding...
    It happens without any apparent reason, and temperature isn't related, as it happens even at 45°C...
    I've tried :
    Updating drivers, downgrading them,
    Reinstalling Windows (x64),
    Updating the motherboard BIOS (Gigabyte P55A-UD3R with the latest BIOS version),
    Flashing my OCZ Vertex 3 SSD (a pain...),
    Upgrading my power supply to a Corsair 750GS.
    Nothing corrects the problem.
    As a last resort I'm looking to update the video BIOS, but I don't want to brick my precious video card...
    So I'm requesting your help to find "THE" correct video BIOS file version, to flash with NVFlash 5.118 for Windows for example.
    Overclocking isn't my goal, stability is...
    Thanks a lot for your help... as I can't find a video BIOS file on the MSI support site...

    You are lightning fast!!
    880 -> 822
    1050 -> 2000
    I sure have made an error here!!
    The modification will be done as soon as I get close to my Crysis PC... (ETA 18h00 GMT+1)
    880 -> 822 won't be noticeable.
    1050 -> 2000 will result in a massive perf improvement??
    I read:
    4000 MHz!!!!
    I believe it's all wrong...

  • Possible reason for directory freezing...

    First of all, thank you so much for reading my first post. I'm from Mexico, I work for a bank, and I'm getting started with the amazing world of LDAP. Here's my question:
    Is it possible that Directory Server 5.2 (running on SunOS) could freeze due to a large amount of groups? Is there a limit on the number of groups for LDAP under this version?

    Hi,
    Impact of groups on performance depends on the client access pattern... All I can tell you is that evaluating group memberships with groups containing more than 10000 members is slow.
    AFAIK, you can get perf improvements around groups from the support team, but first you would need to characterize the problem, i.e. identify which LDAP request causes the problem by analysing the directory access log.
    Regards,
    -Sylvain
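
    For illustration, the access-log triage Sylvain describes can start as simply as scanning for operations with a high elapsed time. A rough sketch, assuming the etime=N (elapsed seconds) field that Sun Directory Server access logs record on RESULT lines; the log path and the 1-second threshold are examples:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class SlowLdapOps {
            private static final Pattern ETIME = Pattern.compile("etime=(\\d+)");

            public static void main(String[] args) throws Exception {
                try (BufferedReader in = new BufferedReader(
                        new FileReader("/var/opt/mps/serverroot/slapd-ds/logs/access"))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        Matcher m = ETIME.matcher(line);
                        // Print candidate slow requests for closer inspection.
                        if (m.find() && Long.parseLong(m.group(1)) >= 1) {
                            System.out.println(line);
                        }
                    }
                }
            }
        }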

  • Will URLS (Unified Light Speed) improve the current app. perf. as well?

    Hi,
    We are using an ABAP Web Dynpro application that encapsulates an Interactive PDF as well, but its performance is not good.
    Normally it takes 30-50 seconds when a user opens the work item from UWL, which then calls the Web Dynpro application and shows the PDF to the user.
    My question is: if we go for EHP1 for NW, which brings Unified Rendering Light Speed in Web Dynpro ABAP, will this technology help improve the performance of the current Web Dynpro application, or only of new applications built using it?
    Please advise, or tell me another way to improve the performance of the Web Dynpro application.
    Thanks,
    Rahul

    Hi Rahul,
    The new Light Speed rendering engine is the framework that renders Web Dynpro applications. Therefore there is nothing whereby you specify that a particular application is developed using the Light Speed rendering engine.
    In other words, to answer your question: EHP1 will improve performance for all applications, whether developed on EHP1 or prior to it.
    Regards
    Rohit Chowdhary

  • Checkpoint info

    Looking at the log below from the alert log, does it mean that the checkpoint took 5 minutes?
    Tue Mar  3 11:04:04 2009
    Beginning log switch checkpoint up to RBA [0x1a9d.2.10], SCN: 2337499124
    Tue Mar  3 11:04:04 2009
    Thread 1 advanced to log sequence 6813 (LGWR switch)
      Current log# 1 seq# 6813 mem# 0: /u01/oradata/perf/redo01a.log
      Current log# 1 seq# 6813 mem# 1: /u02/oradata/perf/redo01b.log
    Tue Mar  3 11:09:09 2009
    Completed checkpoint up to RBA [0x1a9d.2.10], SCN: 2337499124

    I see this in ADDM report for that time period:
    FINDING 1: 49% impact (3133 seconds)
    The SGA was inadequately sized, causing additional I/O or hard parses.
    RECOMMENDATION 1: DB Configuration, 49% benefit (3133 seconds)
    ACTION: Increase the size of the SGA by setting the parameter
    "sga_target" to 36000 M.
    SYMPTOMS THAT LED TO THE FINDING:
    SYMPTOM: Wait class "User I/O" was consuming significant database time.
    (12% impact [797 seconds])
    Waits on event "log file sync" while performing COMMIT and ROLLBACK operations
    were consuming significant database time.
    RECOMMENDATION 1: Host Configuration, 14% benefit (923 seconds)
    ACTION: Investigate the possibility of improving the performance of I/O
    to the online redo log files.
    RATIONALE: The average size of writes to the online redo log files was
    52 K and the average time per write was 2 milliseconds.
    SYMPTOMS THAT LED TO THE FINDING:
    SYMPTOM: Wait class "Commit" was consuming significant database time.
    (14% impact [923 seconds])
    I don't really understand why it's doing this or what it means

  • How to improve the event log read performance under intensive event writing

    We are collecting ETW events from customer machines. In our perf test, the event read rate can reach 5000/sec when there is no heavy event writing. However, the customer machine has very intensive event writing, and our read rate dropped a lot (to 300/sec).
    I understand it is I/O-bound, since event writes and reads race for the log file; this is confirmed by the fact that whenever there is a burst of event writes, a dip in event reads happens at the same time. As a result, the event reads cannot catch up with the event writes and the customer's logs lag behind.
    Note that most of the events are security events generated by Windows (instead of by customers).
    Is there a way to improve event read performance under intensive event writing? I know it is a hard question given the theoretical blocker just mentioned, but we will lose customers if there is no solution. Appreciate any clue very much!

    Hi Leonjl,
    Thank you for posting on MSDN forum.
    I am trying to invite someone who is familiar with this to come into this thread.
    Regards,

  • Confused about transaction, checkpoint, normal recovery.

    After reading the documentation PDF, I'm getting confused by its description.
    Rephrased from the paragraph on the transaction pdf:
    "When database records are created, modified, or deleted, the modifications are represented in the BTree's leaf nodes. Beyond leaf node changes, database record modifications can also cause changes to other BTree nodes and structures"
    "if your writes are transaction-protected, then every time a transaction is committed the leaf nodes(and only leaf nodes) modified by that transaction are written to JE logfiles on disk."
    "Normal recovery, then is the process of recreating the entire BTree from the information available in the leaf nodes."
    According to the above description, I have the following concerns:
    1. If I open a new environment and DB, insert/modify/delete several million records, and never reopen the environment, then normal recovery is not run. Does that mean that, so far, the BTree is not complete? Will that affect query efficiency? Or even worse, will it output incorrect results?
    2. If my thinking above is correct, then every time I finish committing transactions I need to let the checkpointer run in order to recreate the whole BTree. If it is not correct, then I don't need to care about anything - just call transaction.commit() or db.sync() and let JE take care of all the details. (I hope this is true :>)
    michael.

    http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/chkpoint.html
    Checkpoints are normally performed by the checkpointer background thread, which is always running. Like all background threads, it is managed using the je.properties file. Currently, the only checkpointer property that you may want to manage is je.checkpointer.bytesInterval. This property identifies how much JE's log files can grow before a checkpoint is run. Its value is specified in bytes. Decreasing this value causes the checkpointer thread to run checkpoints more frequently. This will improve the time that it takes to run recovery, but it also increases the system resources (notably, I/O) required by JE.
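
    For illustration, the same property can also be set programmatically through EnvironmentConfig rather than je.properties. A minimal sketch, with the 20 MB interval chosen arbitrarily (smaller values mean more frequent checkpoints: faster recovery at the cost of extra I/O):

        import java.io.File;
        import com.sleepycat.je.Environment;
        import com.sleepycat.je.EnvironmentConfig;

        public class JECheckpointInterval {
            public static void main(String[] args) throws Exception {
                EnvironmentConfig config = new EnvironmentConfig();
                config.setAllowCreate(true);
                config.setTransactional(true);
                // Same effect as je.checkpointer.bytesInterval in je.properties:
                // run a checkpoint after roughly every 20 MB of new log data.
                config.setConfigParam("je.checkpointer.bytesInterval", "20000000");
                Environment env = new Environment(new File("/tmp/je-env"), config);
                env.close();
            }
        }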
    """
