Performance with Dedup on HP ProLiant DL380p Gen8

Hi all,
it is not that I haven't been warned. It is just that I simply do not understand why write performance on the newly created pool is so horrible...
Hopefully I'll get some more advice here. Some basic figures:
The machine is an HP ProLiant DL380p Gen8 with two Intel Xeon E5-2665 CPUs and 128 GB RAM.
The storage pool is made out of 14 900GB SAS 10k disks on two HP H221 SAS HBAs in two HP D2700 storage enclosures.
The system is Solaris 11.1.
root@server12:~# zpool status -D datenhalde
  pool: datenhalde
 state: ONLINE
  scan: none requested
config:

        NAME                        STATE     READ WRITE CKSUM
        datenhalde                  ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            c11t5000C5005EE0F5D5d0  ONLINE       0     0     0
            c12t5000C5005EDBBB95d0  ONLINE       0     0     0
          mirror-1                  ONLINE       0     0     0
            c11t5000C5005EE20251d0  ONLINE       0     0     0
            c12t5000C5005ED658F1d0  ONLINE       0     0     0
          mirror-2                  ONLINE       0     0     0
            c11t5000C5005ED80439d0  ONLINE       0     0     0
            c12t5000C5005EDB23F1d0  ONLINE       0     0     0
          mirror-3                  ONLINE       0     0     0
            c11t5000C5005EDA2315d0  ONLINE       0     0     0
            c12t5000C5005ED6E049d0  ONLINE       0     0     0
          mirror-4                  ONLINE       0     0     0
            c11t5000C5005EDBB289d0  ONLINE       0     0     0
            c12t5000C5005EDB9479d0  ONLINE       0     0     0
          mirror-5                  ONLINE       0     0     0
            c11t5000C5005EDD8385d0  ONLINE       0     0     0
            c12t5000C5005ED72855d0  ONLINE       0     0     0
          mirror-6                  ONLINE       0     0     0
            c11t5000C5005ED8759Dd0  ONLINE       0     0     0
            c12t5000C5005EE3AB59d0  ONLINE       0     0     0
        spares
          c11t5000C5005ED6CEADd0    AVAIL
          c12t5000C5005EDA2CD5d0    AVAIL

errors: No known data errors
DDT entries 5354008, size 292 on disk, 152 in core

bucket              allocated                       referenced
______   ______________________________   ______________________________
refcnt   blocks   LSIZE   PSIZE   DSIZE   blocks   LSIZE   PSIZE   DSIZE
------   ------   -----   -----   -----   ------   -----   -----   -----
     1    3,22M    411G    411G    411G    3,22M    411G    411G    411G
     2    1,28M    163G    163G    163G    2,93M    374G    374G    374G
     4     440K   54,9G   54,9G   54,9G    2,12M    271G    271G    271G
     8     140K   17,5G   17,5G   17,5G    1,39M    177G    177G    177G
    16    36,1K   4,50G   4,50G   4,50G     689K   85,9G   85,9G   85,9G
    32    6,26K    798M    798M    798M     277K   34,4G   34,4G   34,4G
    64    1,92K    244M    244M    244M     136K   16,9G   16,9G   16,9G
   128       56   6,52M   6,52M   6,52M    10,5K   1,23G   1,23G   1,23G
   256      222   27,5M   27,5M   27,5M    71,0K   8,80G   8,80G   8,80G
   512        2    256K    256K    256K    1,38K    177M    177M    177M
    1K        4    384K    384K    384K    6,00K    612M    612M    612M
    4K        1     512     512     512    4,91K   2,45M   2,45M   2,45M
   16K        1    128K    128K    128K    24,9K   3,11G   3,11G   3,11G
  512K        1    128K    128K    128K     599K   74,9G   74,9G   74,9G
 Total    5,11M    652G    652G    652G    11,4M   1,43T   1,43T   1,43T
root@server12:~# zpool list
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
datenhalde 5,69T 662G 5,04T 11% 2.22x ONLINE -
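(Quick sanity check on those numbers: the DDT histogram shows 11,4M referenced blocks against 5,11M allocated blocks, i.e. a factor of about 2,23, which lines up with the 2.22x in the DEDUP column; the histogram values are rounded.)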
root@server12:~# ./arc_summery.pl
System Memory:
        Physical RAM:  131021 MB
        Free Memory :  18102 MB
        LotsFree:      2047 MB

ZFS Tunables (/etc/system):

ARC Size:
        Current Size:             101886 MB (arcsize)
        Target Size (Adaptive):   103252 MB (c)
        Min Size (Hard Limit):    64 MB (zfs_arc_min)
        Max Size (Hard Limit):    129997 MB (zfs_arc_max)

ARC Size Breakdown:
        Most Recently Used Cache Size:    100%  103252 MB (p)
        Most Frequently Used Cache Size:    0%  0 MB (c-p)

ARC Efficency:
        Cache Access Total:        124583164
        Cache Hit Ratio:      70%  87975485   [Defined State for buffer]
        Cache Miss Ratio:     29%  36607679   [Undefined State for Buffer]
        REAL Hit Ratio:      103%  128741192  [MRU/MFU Hits Only]

        Data Demand Efficiency:    91%
        Data Prefetch Efficiency:  29%

        CACHE HITS BY CACHE LIST:
          Anon:                        --%  Counter Rolled.
          Most Recently Used:          74%  65231813 (mru)    [ Return Customer ]
          Most Frequently Used:        72%  63509379 (mfu)    [ Frequent Customer ]
          Most Recently Used Ghost:     0%  0 (mru_ghost)     [ Return Customer Evicted, Now Back ]
          Most Frequently Used Ghost:   0%  0 (mfu_ghost)     [ Frequent Customer Evicted, Now Back ]

        CACHE HITS BY DATA TYPE:
          Demand Data:                 15%  13467569
          Prefetch Data:                4%  3555720
          Demand Metadata:             80%  70648029
          Prefetch Metadata:            0%  304167

        CACHE MISSES BY DATA TYPE:
          Demand Data:                  3%  1281154
          Prefetch Data:               23%  8429373
          Demand Metadata:             73%  26879797
          Prefetch Metadata:            0%  17355
root@server12:~# echo "::arc" | mdb -k
hits = 88823429
misses = 37306983
demand_data_hits = 13492752
demand_data_misses = 1281335
demand_metadata_hits = 71470790
demand_metadata_misses = 27578897
prefetch_data_hits = 3555720
prefetch_data_misses = 8429373
prefetch_metadata_hits = 304167
prefetch_metadata_misses = 17378
mru_hits = 66467881
mru_ghost_hits = 0
mfu_hits = 64253247
mfu_ghost_hits = 0
deleted = 41770876
mutex_miss = 172782
hash_elements = 18446744073676992500
hash_elements_max = 18446744073709551615
hash_collisions = 12375174
hash_chains = 18446744073698514699
hash_chain_max = 9
p = 103252 MB
c = 103252 MB
c_min = 64 MB
c_max = 129997 MB
size = 102059 MB
buf_size = 481 MB
data_size = 100652 MB
other_size = 924 MB
l2_hits = 0
l2_misses = 28860232
l2_feeds = 0
l2_rw_clash = 0
l2_read_bytes = 0 MB
l2_write_bytes = 0 MB
l2_writes_sent = 0
l2_writes_done = 0
l2_writes_error = 0
l2_writes_hdr_miss = 0
l2_evict_lock_retry = 0
l2_evict_reading = 0
l2_abort_lowmem = 0
l2_cksum_bad = 0
l2_io_error = 0
l2_hdr_size = 0 MB
memory_throttle_count = 0
meta_used = 1406 MB
meta_max = 1406 MB
meta_limit = 0 MB
arc_no_grow = 1
arc_tempreserve = 0 MB
root@server12:~#
The write performance is really, really slow.
Read/write within this pool:
root@server12:/datenhalde/s12test/Bild-DB/Testaktion# /usr/gnu/bin/dd if=Test.tif of=Test2.tif
1885030+1 records in
1885030+1 records out
965135496 bytes (965 MB) copied, 145,923 s, 6,6 MB/s
Read from this pool and write to the root pool:
root@server12:/datenhalde/s12test/Bild-DB/Testaktion# /usr/gnu/bin/dd if=Test.tif of=/tmp/Test2.tif
1885030+1 records in
1885030+1 records out
965135496 bytes (965 MB) copied, 9,51183 s, 101 MB/s
root@server12:/datenhalde/s12test/Bild-DB/Testaktion# /usr/gnu/bin/dd if=FS2013_Fashionation_Beach_06.tif of=FS2013_Test.tif
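(Side note: dd without bs= copies in 512-byte records - 965135496 bytes / 1885030 records ≈ 512 - so a fairer test would probably be something like "/usr/gnu/bin/dd if=Test.tif of=Test2.tif bs=1M". But the copy to /tmp above manages 101 MB/s with the same 512-byte records, so per-call overhead can't be the whole story.)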
I just do not get this. Why is it that slow? Am I missing any tunable parameters? From the figures above, the DDT should use 5354008*152 = 776 MB in RAM. That should fit easily.
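(Spelled out: 5,354,008 DDT entries × 152 bytes in core = 813,809,216 bytes ≈ 776 MiB, and even the on-disk form is only 5,354,008 × 292 bytes ≈ 1.46 GiB. With the ARC currently at about 101886 MB, the whole table should fit in RAM many times over - if I am reading the "zpool status -D" summary line correctly.)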
Sorry for the longish post, but I really need some help here, because the real data, with a much higher dedup ratio, is still to be copied to that pool.
Compression is no real alternative, because most of the data will be compressed images and I don't expect to see great compression ratios.
TIA and kind regards,
Tom

Hi Cindy,
thanks for answering :)
Isn't the tunable parameter "arc_meta_limit" obsolete in Solaris 11?
Before Solaris 11 you could tune arc_meta_limit by setting something reasonable in /etc/system with "set zfs:zfs_arc_meta_limit=....", which, at boot, is copied into arc_c_max, overriding the default setting.
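(For reference, I mean a line like the following in /etc/system - the value is purely illustrative, not a recommendation:
set zfs:zfs_arc_meta_limit=8589934592
i.e. 8 GB.)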
On this Solaris 11.1 system, c_max is already maxed out without any tuning ("kstat -p zfs:0:arcstats:c_max" -> "zfs:0:arcstats:c_max 136312127488"). This also seems to be reflected by the parameter "meta_limit = 0". Am I missing something here?
When looking at the output of "echo "::arc" | mdb -k", I see the values of "meta_used", "meta_max" and "meta_limit". I understand these as "memory used for metadata right now", "maximum memory used for metadata in the past" and "theoretical limit on memory used for metadata", with a value of "0" meaning "unlimited". Right?
What exactly is "arc_no_grow = 1" saying here?
Sorry if I'm asking some silly questions. This is all a bit frustrating ;)
When dedup is disabled on the pool, write performance increases almost instantly. I did not test it long enough to get real figures; I'll probably do that (possibly even with Solaris 10) tomorrow.
Would Oracle be willing to help me out under a support plan when running Solaris 11.1 on a machine that is certified for Solaris 10 only?
Thanks again and kind regards,
Tom

Similar Messages

  • Proliant DL380p Gen8 drivers

    I'm trying to download the network adapter drivers for the ProLiant DL380p Gen8 from this link:
    http://www8.hp.com/us/en/support-search.html?tab=1#/qryterm=DL380p&searchtype=s-002
    No matter what I try, it says that the site is not available. Please advise. Thank you.

    Hi @Buckeyecu ,
    I understand you are experiencing issues when trying to access the link you have provided in your post. Please retry this link. 
    http://www8.hp.com/us/en/support-search.html?tab=1#/qryterm=DL380p&searchtype=s-002
    Regards,
    George
    I work for HP

  • Problem during installation on an HP ProLiant DL380p Gen8

    Dear All, 
    We have an HP ProLiant DL380p Gen8 machine, and I am facing some issues during the installation of Oracle Linux version 5. Please help me regarding the following points:
    1. Is this machine compatible with Oracle Linux software?
    2. If yes, then please see the attached error and guide me to resolve it.
    Regards,
    Akhtar

    Hi:
    You may also want to post your question on the HP Business Support Forum -- DL Servers section.
    http://h30499.www3.hp.com/t5/ProLiant-Servers-ML-DL-SL/bd-p/itrc-264#.VQGkMHl0y9I

  • HP Proliant DL560 (gen8)

    Please advise whom to contact if we need to purchase an "HP ProLiant DL560 (Gen8)" server with the following specifications:
    - HP PROLIANT DL560 (GEN8) INTEL XEON E5-4650 (2.7GHZ/8-CORE/20MB/130W) PROCESSOR 20MB(1X20MB) LEVEL3 CACHE, 32GB (4X4GB) PC3L 10600R (DDR3-1333) Registered DIMMS, HP SMART ARRAY P420I/1GB with FBWC(RAID 0/1/1+0/5/5+0/6/6+0), Hard Disk 5X 300GB SAS 10K, Redundant Supply
    Qty.: 01
    This is required in Saudi Arabia; the local contact number given here is not being answered.
    Please advise.
    Thank You.

    Hello mikigab,
    Welcome to the HP Forums, I hope you enjoy your experience! To help you get the most out of the HP Forums I would like to direct your attention to the HP Forums Guide First Time Here? Learn How to Post and More.
    I understand you have questions about graphics cards and the HP ProLiant DL560 Server. I am sorry, but to get your issue more exposure, I would suggest posting it in the commercial forums, since this is a commercial product. You can do this at HP Enterprise Business Community - ProLiant.
    I hope this helps. Thank you for posting on the HP Forums. Have a great day!
    Please click the "Thumbs Up" on the bottom right of this post to say thank you if you appreciate the support I provide!
    Also be sure to mark my post as "Accept as Solution" if you feel my post solved your issue; it will help others who face the same challenge find the same solution.
    Dunidar
    I work on behalf of HP
    Find out a bit more about me by checking out my profile!
    "Customers don’t expect you to be perfect. They do expect you to fix things when they go wrong." ~ Donald Porter

  • ProLiant ML310e Gen8 v2

    Hello,
    just unboxed a brand-new HP ProLiant ML310e Gen8 v2 server (8 GB RAM). I can reach the iLO web interface, but the server hangs at 90% during early Windows 2012 system initialization. I tried RAID 1 and RAID 0; I tested the RAID with the HP tools and there are no errors. Sometimes the Windows server starts fine; I checked the Event Viewer and no errors were found. I reinstalled the server 4 times in different ways with the same issue, using the latest drivers for the RAID. Any ideas? (The HP support phone is always busy.)

    I worked with HP support; we found that the issue was an MS update. We applied the fix, which solved the issue.
    more info:
    http://www.tsf.net.au/kb2967012-a-windows-8-1-based-or-windows-server-2012-r2-based-computer-does-no...

  • HP ML310e ProLiant v2 Gen8 + Universal USB audio device

    The server is an HP ML310e ProLiant v2 Gen8 with a USB audio device attached: an alarm program for monitored objects needs to play sound, and there is no working audio built into the motherboard. The "universal USB audio device" is made from USB headphones with the earphones cut off and rewired to desktop speakers.
    After every shutdown or reboot of the server, the "Speakers" icon in the taskbar shows a red cross and the sound does not work, although in Device Manager, under "Sound and audio devices", the "USB Audio Device" status says "the device is working properly".
    To get the sound working again, you have to open Device Manager each time, click "Remove Device" and then "Scan for hardware changes".
    But that is very hard for the regular on-duty staff, so they call the "programmer" at 3 am...
    How can I solve this problem other than by buying a PCI sound card? And would that even work?

    Hi:
    You may also want to post your question on the HP Business Support Forum -- ML Servers section.
    http://h30499.www3.hp.com/t5/ProLiant-Servers-ML-DL-SL/bd-p/itrc-264#.VNTXTHk5C9I

  • Report performance with Hierarchies

    Hi
    How can I improve query performance with hierarchies? We have to do a lot of navigation in the query, and the volume of data is very big.
    Thanks
    P G

    Hi,
    check these:
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1955ba90-0201-0010-d3aa-8b2a4ef6bbb2
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ce7fb368-0601-0010-64ba-fadc985a1f94
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/4c0ab590-0201-0010-bd9a-8332d8b4f09c
    Query Performance – Is "Aggregates" the way out for me?
    /people/vikash.agrawal/blog/2006/04/17/query-performance-150-is-aggregates-the-way-out-for-me
    ° the OLAP cache is architected to store query result sets and to give all users access to those result sets.
    If a user executes a query, the result set for that query’s request can be stored in the OLAP cache; if that same query (or a derivative) is then executed by another user, the subsequent query request can be filled by accessing the result set already stored in the OLAP cache.
    In this way, a query request filled from the OLAP cache is significantly faster than queries that receive their result set from database access
    ° The indexes that are created in the fact table for each dimension allow you to easily find and select the data
    see http://help.sap.com/saphelp_nw04/helpdata/en/80/1a6473e07211d2acb80000e829fbfe/content.htm
    ° when you load data into the InfoCube, each request has its own request ID, which is included in the fact table in the packet dimension.
    This (besides making it possible to manage/delete single requests) increases the volume of data and reduces performance in reporting, as the system has to aggregate over the request ID every time you execute a query. By using compression, you can eliminate these disadvantages and bring data from different requests together into one single request (request ID 0).
    This function is critical, as the compressed data can no longer be deleted from the InfoCube using its request IDs and, logically, you must be absolutely certain that the data loaded into the InfoCube is correct.
    see http://help.sap.com/saphelp_nw04/helpdata/en/ca/aa6437e7a4080ee10000009b38f842/content.htm
    ° by using partitioning you can split up the whole dataset for an InfoCube into several, smaller, physically independent and redundancy-free units. Thanks to this separation, performance is increased when reporting, or also when deleting data from the InfoCube.
    see http://help.sap.com/saphelp_nw04/helpdata/en/33/dc2038aa3bcd23e10000009b38f8cf/content.htm
    Hope it helps!
    Thank you,
    dst

  • Are there issues with poor performance with Mavericks OS?

    Are there issues with slow and reduced performance with Mavericks OS?

    check this
    http://apple.stackexchange.com/questions/126081/10-9-2-slows-down-processes
    or
    this:
    https://discussions.apple.com/message/25341556#25341556
    I am doing a lot of analyses with 10.9.2 on a late 2013 MBP, and these analyses generally last hours or days. I observed that Mavericks slows them down considerably for some reason after a few hours of computation, making it impossible for me to work with this computer...

  • Performance with the new Mac Pros?

    I sold my old Mac Pro (first generation) a few months ago in anticipation of the new line-up. In the meantime, I purchased a i7 iMac and 12GB of RAM. This machine is faster than my old Mac for most Aperture operations (except disk-intensive stuff that I only do occasionally).
    I am ready to purchase a "real" Mac, but I'm hesitating because the improvements just don't seem that great. I have two questions:
    1. Has anyone evaluated qualitative performance with the new ATI 5870 or 5770? Long ago, Aperture seemed pretty much GPU-constrained. I'm confused about whether that's the case anymore.
    2. Has anyone evaluated any of the new Mac Pro chips for general day-to-day use? I'm interested in processing through my images as quickly as possible, so the actual latency to demosaic and render from the raw originals (Canon 1-series) is the most important metric. The second thing is having reasonable performance for multiple brushed-in effect bricks.
    I'm mostly curious if anyone has any experience to point to whether it's worth it -- disregarding the other advantages like expandability and nicer (matte) displays.
    Thanks.
    Ben

    Thanks for writing. Please don't mind if I pick apart your statements.
    "For an extra $200 the 5870 is a no brainer." I agree on a pure cost basis that it's not a hard decision. But I have a very quiet environment, and I understand this card can make a lot of noise. To pay money, end up with a louder machine, and on top of that realize no significant benefit would be a minor disaster.
    So, the more interesting question is: has anyone actually used the 5870 and can compare it to previous cards? A 16-bit 60 megapixel image won't require even .5GB of VRAM if fully tiled into it, for example, so I have no ability, a priori, to prove to myself that it will matter. I guess I'm really hoping for real-world data. Perhaps you speak from this experience, Matthew? (I can't tell.)
    Background work and exporting are helpful, but not as critical for my primary daily use. I know the CPU is also used for demosaicing or at least some subset of the render pipeline, because I have two computers that demonstrate vastly different render-from-raw response times with the same graphics card. Indeed, it is this lag that would be the most valuable of all for me to reduce. I want to be able to flip through a large shoot and see each image at 100% as instantaneously as possible. On my 2.8 i7 that process takes about 1 second on average (when Aperture doesn't get confused and mysteriously stop rendering 100% images).
    Ben

  • Performance with Boot Camp/Gaming?

    Hi,
    I just acquired a MBP/2GHz IntelCD/2GB RAM/100GB/Superdrive, with Applecare. Can anyone comment about the performance with
    Boot Camp -- running Windows XP SP2, and what the gaming graphics are like?
    Appreciate it, thanks...
    J.
    Powerbook G4 [15" Titanium - DVI] Mac OS X (10.4.8) 667MHz; 1GB RAM; 80GB

    Well, I didn't forget to mention it - I simply did not know it yet... so that's not exactly correct.
    As per Apple's support page, http://support.apple.com/specs/macbookpro/MacBook_Pro.html
    My new computer does have 256MB of video memory...

  • Performance with external display

    Hello,
    when I connect my 19'' TFT to the MacBook, the performance (with the same applications running) is really bad. It takes longer to switch between apps, and if I don't use an app for some time, it can take up to 30 seconds to "reactivate" it.
    Because the HD is working when I switch apps, it looks like the OS is swapping. My question: would it help to upgrade the MacBook to 2GB RAM? AFAIK the Intel card uses shared memory.
    Thanks for your help
    Till
    MacBook 1.83 GHz/1GB   Mac OS X (10.4.8)  

    How much RAM do you have? Remember that the MB does not have dedicated VRAM like some computers do and that it uses the system RAM to drive the graphics chipset.
    I use my MB with the mini-DVI to DVI adapter to drive a 20" widescreen monitor without any of the problems that you describe, but I have 2GB of RAM. If you only have the stock 512MB of RAM, that may be part of what you are seeing.

  • Performance with LinkedList in Java

    Hello All,
    Please suggest a solution to further improve performance with List.
    The problem description is as follows:
    I have a huge number of objects, let's say 10,000, and I need to store them in a collection such that, if I want to store an object at some particular index i, I get the best performance.
    I suppose that, as I need index-based access, using a List makes the best sense, as Lists are ordered.
    Using LinkedList over ArrayList gives better performance in the aforementioned context.
    Is there a way I can further improve the performance of LinkedList while solving this particular problem,
    OR
    is there any other index-based collection with which I get better performance than LinkedList?
    Thanks in advance

    The trouble with a LinkedList as implemented in the Java libraries is that if you want to insert at index 100, it has no option but to step through the first 100 links of the list to find the insert point. Likewise if you retrieve by index. The strength of the linked-list approach is lost if you work by index; the natural interface for a linked list is an extended iterator with methods for insert and replace, which is what ListIterator (obtained from List.listIterator(int)) in fact provides. Of course LinkedLists are fine when you insert first or last.
    My guess would be that if your habitual insertion point was half way or more through the list then ArrayList would serve you better especially if you leave ample room for growth. Moving array elements up is probably not much more expensive, per element, than walking the linked list. Maybe 150% or thereabouts.
    Much depends on how you retrieve and how volatile the list is. Certainly if you are in a read-mostly situation and cannot use an iterator, then a LinkedList won't suit.
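    To illustrate the iterator point, here is a minimal sketch (the class name and the index 5000 are made up for the example; the API is plain java.util):
    import java.util.LinkedList;
    import java.util.List;
    import java.util.ListIterator;
    public class MidListInsert {
        public static void main(String[] args) {
            List<Integer> list = new LinkedList<>();
            for (int i = 0; i < 10000; i++) {
                list.add(i); // appending at the end of a LinkedList is O(1)
            }
            // Index-based insert: each call walks the links again to find
            // position 5000 before splicing in the new node.
            list.add(5000, -1);
            // Iterator-based insert: walk to the position once, then every
            // add() splices in a node in constant time, with no re-walk.
            ListIterator<Integer> it = list.listIterator(5000);
            it.add(-2);
            it.add(-3);
        }
    }
    So if the insertions cluster around one region, positioning a ListIterator once and adding through it avoids the repeated walks; if the insertion points are scattered, the advice above stands and ArrayList is usually the safer bet.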

  • Performance with dates in the where clause

    Performance with dates in the where clause
    CREATE TABLE TEST_DATA (
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    );
    create index t_indx on test_data(fdata);
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    2) From the execution plan, I see that query 2 & 3 is better than query 1. I do not see any difference between execution plan 2 & 3. Which one is better?
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for Execution plan 2 & 3?
    3) Could someone explain what the filter & access predicates mean here?
    Thanks in advance.
    Execution Plan 1:
    SQL> select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1486387033
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 517 (20)| 00:00:07 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | TABLE ACCESS FULL| TEST_DATA | 341 | 3069 | 517 (20)| 00:00:07 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(INTERNAL_FUNCTION("FDATE"))=TRUNC(SYSDATE@!))
    Note
    - dynamic sampling used for this statement
    Statistics
    4 recursive calls
    0 db block gets
    1610 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed
    Execution Plan 2:
    SQL> select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TRUNC(SYSDATE@!)<=TRUNC(SYSDATE@!)+.9999884259259259259259
    259259259259259259)
    3 - access("FDATE">=TRUNC(SYSDATE@!) AND
    "FDATE"<=TRUNC(SYSDATE@!)+.999988425925925925925925925925925925925
    9)
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows
    Execution Plan 3:
    SQL> select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_dat
    e('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    COUNT(*)
    283
    Execution Plan
    Plan hash value: 1687886199
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT | | 1 | 9 | 3 (0)| 00:00:01 |
    | 1 | SORT AGGREGATE | | 1 | 9 | | |
    |* 2 | FILTER | | | | | |
    |* 3 | INDEX RANGE SCAN| T_INDX | 283 | 2547 | 3 (0)| 00:00:01 |
    Predicate Information (identified by operation id):
    2 - filter(TO_DATE('21-APR-10','dd-MON-yy')<=TO_DATE('21-APR-10
    23:59:59','DD-MON-YY hh24:mi:ss'))
    3 - access("FDATE">=TO_DATE('21-APR-10','dd-MON-yy') AND
    "FDATE"<=TO_DATE('21-APR-10 23:59:59','DD-MON-YY hh24:mi:ss'))
    Note
    - dynamic sampling used for this statement
    Statistics
    7 recursive calls
    0 db block gets
    76 consistent gets
    0 physical reads
    0 redo size
    412 bytes sent via SQL*Net to client
    380 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    1 rows processed

    Hi,
    user10541890 wrote:
    Performance with dates in the where clause
    CREATE TABLE TEST_DATA
    FNUMBER NUMBER,
    FSTRING VARCHAR2(4000 BYTE),
    FDATE DATE
    create index t_indx on test_data(fdata);
    Did you mean fdate (ending in e)?
    Be careful; post the code you're actually running.
    query 1: select count(*) from TEST_DATA where trunc(fdate) = trunc(sysdate);
    query 2: select count(*) from TEST_DATA where fdate between trunc(sysdate) and trunc(SYSDATE) + .99999;
    query 3: select count(*) from TEST_DATA where fdate between to_date('21-APR-10', 'dd-MON-yy') and to_date('21-APR-10 23:59:59', 'DD-MON-YY hh24:mi:ss');
    My questions:
    1) Why isn't the index t_indx used in Execution plan 1?
    To use an index, the indexed column must stand alone as one of the operands. If you had a function-based index on TRUNC (fdate), then it might be used in Query 1, because the left operand of = is TRUNC (fdate).
    2) From the execution plan, I see that query 2 & 3 is better than query 1. I do not see any difference between execution plan 2 & 3. Which one is better?
    That depends on what you mean by "better".
    If "better" means faster, you've already shown that one is about as good as the other.
    Queries 2 and 3 are doing different things. Assuming the table stays the same, Query 2 may give different results every day, but the results of Query 3 will never change.
    For clarity, I prefer:
    WHERE   fdate >= TRUNC (SYSDATE)
    AND     fdate <  TRUNC (SYSDATE) + 1
    (or replace SYSDATE with a TO_DATE expression, depending on the requirements).
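    (A quick worked check on the BETWEEN variant: 0.99999 of a day is 86,399.136 seconds, so it silently excludes anything stamped in the last 0.864 seconds of the day; the filter in plan 2 even shows 0.9999884259... = 86399/86400, i.e. 23:59:59. Either way, only the open-ended "fdate < TRUNC (SYSDATE) + 1" form covers the whole day.)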
    3) I read somewhere - "Always check the Access Predicates and Filter Predicates of Explain Plan carefully to determine which columns are contributing to a Range Scan and which columns are merely filtering the returned rows. Be sceptical if the same clause is shown in both."
    Is that true for Execution plan 2 & 3?
    3) Could someone explain what the filter & access predicates mean here?
    Sorry, I can't.

  • Can I partition my HD now for FCPX, since I didn't do it when I downloaded it? My performance with FCPX is terrible

    When I downloaded FCPX I was told I really didn't need to partition my hard drive. But my performance with FCPX is terrible, from constant rendering to waiting a minute after you click for it to do anything. I turned off the rendering and transcoded the media to ProRes (went to File and hit Transcode), but it still crashes and is buggy and sluggish, and it takes hours to do anything. I have my media on an external drive, but I did screw up and put some media on the internal drive. iMac OS X 10.7.2, 3.06 GHz Core 2 Duo, external LaCie HD on FireWire 800.

    Partitioning the internal drive is - imho! - completely useless for UNIX systems such as Mac OS, due to the excessive use of (hidden!) temp files.
    And, especially, a too-small internal HDD partition for the system will slow any app to a full halt.
    Plus, 10.7 has some issues with managing Time Machine backups when it comes to large files such as video - again, if the internal HDD/OS drive is too small, it gets iffy.
    Lastly: with at least 30 GB free, and only codecs and material as intended by Cupertino, there are no problems here (on a much smaller/older setup).

  • Slow Performance with Business Rules

    Hello,
    Has anyone ever had slow performance with business rules? For example, I attached a calc script to a form and it ran for 20 seconds. I made an exact replica of the calc script in a business rule and it took 30 seconds to run. Also, when creating/modifying business rules in EAS, it takes a long time to open, save, or attach security - any ideas on how to improve this performance?
    Thanks!

    If you are having issues with the performance of assigning access, then I am sure there was a patch available; it was either an HSS patch or a Planning patch.
    Cheers
    John
    http://john-goodwin.blogspot.com/

Maybe you are looking for

  • Unable to log into Presence after AD migration

    Presence 8.0.2.98000-5. CUPC clients receive 'Invalid User ID or Password' when attempting to log in. The system was working fine prior to the migration from Windows 2003 AD to Windows 2008 AD, which necessitated a change to the LDAP Host Configuration IP Add

  • Running Safari 6.0.2 - Mac OS 10.7.5

    My MBP will not update Adobe Flash Player - what is the story here? I am not particularly adept at stuff like this. I need a step-by-step list, or is there something to buy to fix this for me? Very frustrating. Thanks for any help.

  • Directory Tree.

    What I am doing is trying to manage the files for a small project, and I will be taking care of the document versions. I was wondering if it is possible to create the whole directory tree, where the names of the subfiles can also be displayed.

  • Commitment item for component check in MIGO

    Hi All, FM is implemented. While doing GI for the production order, the component item check is OK, but I am facing the error 'No commitment item entered in item 00000'. Please help with how to do GI.

  • I need instructions on how to RAID0 my drives - its driving me insane!

    Hey guys, Ive been trying to RAID0 two 80Gb Hitachi's for a long time now with no luck on my 975X.  So basically all Im asking for is exact word for word instructions on how to do it.  Basically what ive done is downloadthe intel drivers, put them on