DB Access Speed improvement

When I try to access and retrieve a huge amount of data and the speed is too slow, what should I consider in order to improve this poor performance?
There can be server-side improvements and client-side ones (for example, ADO.NET code).
I want to know everything I should consider.

Hi dy0803,
For a SELECT statement in SQL Server 2012, you may refer to the suggestions below to improve speed.
1. Create proper indexes on the columns that appear in the WHERE, GROUP BY, and JOIN ON clauses.
2. Write sargable queries so they can take advantage of those indexes (see the sketch below).
3. Try different approaches and compare the execution plans to select the most efficient statement.
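For example, here is a minimal sketch of a non-sargable versus a sargable predicate; the table, index, and column names are made up purely for illustration.
-- Hypothetical table and index, for illustration only.
CREATE TABLE dbo.Orders (
OrderId int IDENTITY(1,1) PRIMARY KEY,
OrderDate datetime NOT NULL,
Total money NOT NULL
);
CREATE INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);
GO
-- Non-sargable: wrapping the column in a function prevents an index seek.
SELECT OrderId, Total FROM dbo.Orders WHERE YEAR(OrderDate) = 2012;
-- Sargable: a range predicate on the bare column can seek on IX_Orders_OrderDate.
SELECT OrderId, Total FROM dbo.Orders WHERE OrderDate >= '20120101' AND OrderDate < '20130101';
GO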
If you have any question, feel free to let me know.
Eric Zhang
TechNet Community Support

Similar Messages

  • How to improve SQL performance/access speed by altering session parameters

    Dear friends
    how can I improve SQL performance/access speed by altering the session parameters, without altering indexes & the SQL expression?
    regards

    One can try:
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:3696883368520
    http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:5180609822543
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14211/memory.htm#sthref497
    But make sure you understand the caveats you can run into! (A small session-level example follows the links below.)
    It would be better to post the output of select * from v$version; first.
    Also, an execution plan would be nice to see.
    See:
    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting
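    As a minimal sketch of the kind of session-level change those links discuss, assuming Oracle's manual work-area sizing; the parameter values are illustrative only and can easily make things worse, so measure before and after:
    -- Give the current session larger sort and hash work areas (values are placeholders).
    ALTER SESSION SET workarea_size_policy = MANUAL;
    ALTER SESSION SET sort_area_size = 104857600;  -- 100 MB
    ALTER SESSION SET hash_area_size = 104857600;  -- 100 MB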

  • Massive Disk Speed Improvement Plan

    I am moving forward with a disk storage speed improvement plan using my Dell Precision T5400 workstation as the test bed.
    Specifically, my goal is to create a super fast 2 TB drive C: from four OCZ Vertex 3 480GB SATA3 SSD drives in RAID 0 configuration.  This will replace an already fast RAID 0 array made from two Western Digital 1TB RE4 drives.
    So far I have ordered two of these fast SSD drives, along with what is touted to be a very good value in high performance SATA3 RAID controllers, a Highpoint 2420SGL.  I'll get started with this combination and get to know it first as a data drive before trying to make it bootable.
    Getting any kind of hard information online about putting SSDs into RAID is a bit like pulling teeth, so I'm not 100% confident that these parts will work perfectly together, but I think the choice of SSD drives is the right one.  I had briefly considered a PCIe RevoDrive SSD card made by OCZ, but it was just too esoteric...  I'm actually getting double the storage this way for the same price, I can swap to a different RAID controller if need be, and these drives can easily be ported to any new workstation I may get in the future.
    Notably, some early concerns with using SSD in RAID configurations (and things like TRIM commands) have already been alleviated, as the drives are now quite intelligent in their internal "garbage collection" processes.  I've verified this with the engineers at OCZ.  They have said that with these modern SSD drives you really don't have to worry about them being special - just use them as you would a normal drive.
    Once I get the first two SSDs set up in RAID 0 I'll specifically do some comparisons with saving large files and also using the array as the Photoshop scratch drive, vs. the spinning 1 TB drive I have in that role now.
    Assuming all goes well, I'll then add the additional two SSDs to complete the four drive array.  After a quick test of that, I'll see if I can restore a Windows System Image backup made from my 2 TB C: (spinning drive) array, which (if it works) will let me hit the ground running using the same exact Windows setup, just faster.
    My current C: drive, made from two Western Digital 1 TB RE4 drives, delivers about 210 MB/sec throughput with very large files, with 400 MB/sec bursts with small files (these drives have big caches).  Where they fall down dismally (by comparison to SSD) is operations involving seeking...  The PassMark advanced "Workstation" benchmark, which generates random small accesses such as what you might see during real work (and I can hear the drives seeking like crazy), yields a meager 4 MB/sec result.
    My current D: drive, a single Hitachi 1 TB spinning drive, clocks in at about 100 MB/sec for large reads/writes.
    The SSD array should push the throughput up at least 5x as compared to my current drive C: array, to over 1 GB/sec, but the biggest gain should be with random small accesses (no seek time in an SSD), where I'm hoping to see at least a 25x improvement to over 100 MB/second.  That last part is what's going to speed things up from an every day usage perspective.
    I imagine that when the dust settles on this build-up, I'll end up pointing virtually everything at drive C:, including the Photoshop scratch file, since it will have such a massively fast access capability.  It will be interesting to experiment.  I suppose I'll have to come up with some gargantuan panoramas to stitch in order to force Photoshop to go heavily to the scratch drive for testing.
    I'll let you all know how it works out, and I'll be sure to do before/after comparisons of real use scenarios (big files in Photoshop, and various other things).  Perhaps my "real world" results can help others looking to get more Photoshop performance out of their systems understand what SSD can and can't do for them.
    I welcome your thoughts and experiences.
    -Noel

    Not sure who might be following this thread, but I have executed the final phase of this plan, restoring a system backup from my spinning drive array onto the new 4 drive SSD array.
    All went off without a hitch. I have the same system configuration, including all apps and everything just as it was, except everything is now MUCH faster.
    The 4 drive array achieves a staggering 1.74 gigabytes/second sustained throughput rate.
    Windows 7 WEI score is 7.9 for the Primary hard disk category.
    Windows boots up quickly, everything starts immediately, nothing bogs the system down, and just overall everything feels very fluid and snappy.  And there is no seeking noise from the drives.
    Regarding what this has done for Photoshop...  I've only tested on Photoshop CS6 beta so far today, but everything is incrementally improved.  Startup time is faster, things seem more smooth and fluid while editing overall, and a benchmark I created using an action to run a lot of image adjustment operations on a big, multi-layer image ran this long to completion:
    When the file is opened from (and the Photoshop scratch file is on) a single spinning disk: 
    4 minutes 26 seconds (266 seconds)
    When the file is opened from (and the scratch file is on) a fast array of spinning drives: 
    3 minutes 45 seconds (225 seconds)
    When the entire system is run from the SSD array: 
    2 minutes 31 seconds (151 seconds)
    During the action, because so many steps are performed on the big file, Photoshop writes a 30+ gigabyte scratch file on the scratch drive.
    Summary
    Clearly the very fast disk access markedly improves Photoshop's speed when it uses scratch space. 
    Plus copying big image files around is virtually instantaneous. 
    I don't use Bridge myself, but I have noticed that all the image thumbnails (via FastPictureViewer Codec Pack) just show up immediately in Explorer windows and Photoshop File Open/Save dialogs.  We can only assume this kind of drive speed would really make Bridge blaze through its operations as well.
    Following my footsteps would be expensive, but it can really work.
    -Noel

  • DRI (declarative referential integrity) and speed improvements.

    EDITED: See my second post--in my testing, the relevant consideration is whether the parent table has a compound primary key or a single primary key.  If the parent has a simple primary key, and there is a trusted (checked) DRI relation
    with the child, and a query requests only records from the child on an inner join with the parent, then sql server (correctly) skips performing the join (shown in the execution plan).  However, if the parent has a compound primary key, then sql server
    performs a useless join between parent and child.  Tested on SQL Server 2008 R2 and Denali.  If anyone can get SQL Server NOT to perform the join with compound primary keys on the parent, let me know.
    ORIGINAL POST: I'm not seeing the join behavior in the execution plan given in the link provided (namely that the optimizer does not bother performing a join to the parent tbl when a query needs information from the child side only AND
    trusted DRI exists between the tables AND the columns are defined as not null).  The foreign key relation "is trusted" by Sql server ("is not trusted" is false), but the plan always picks both tables for the join although only one is needed. 
    If anyone has comments on whether declarative ref integrity does produce speed improvements on certain joins, please post.  thanks.
    http://dinesql.blogspot.com/2011/04/does-referential-integrity-improve.html

    I'm running sql denali ctp3 x64 and sql 2008 r2 x64, on windows 7 sp1. I've tested it on dozens of tables, and I defy anyone to provide a counter-example (you can create ANY parent table with two ints as a composite primary key, and then a child table using
    that compound as a foreign key, and create a trusted dri link between them and use the above queries I posted)--any table with a compound foreign key relation as the basis for the DRI apparently does not benefit from referential integrity between those tables
    (in terms of performance). Or to be more precise, the execution plan reveals that sql server performs a costly and unnecessary join in these cases, but not when the trusted DRI relation between them is a single primary key. If anyone has seen a different result,
    please let me know, since it does influence my design decisions.
    fwiw, a similar behavior is true of SQL Server's date correlation optimization: it doesn't work if the tables are joined by a composite key, only if they are joined by a single column:
    "There must be a single-column foreign key relationship between the tables."
    So I speculate, knowing absolutely nothing, that there must be something deep in the bowels of the engine that doesn't optimize compound key relations as well as single column ones.
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[parent](
    [pId1] [int] NOT NULL,
    [pId2] [int] NOT NULL,
    CONSTRAINT [PK_parent] PRIMARY KEY CLUSTERED
    (
    [pId1] ASC,
    [pId2] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    CREATE TABLE [dbo].[Children](
    [cId] [int] IDENTITY(1,1) NOT NULL,
    [pid1] [int] NOT NULL,
    [pid2] [int] NOT NULL,
    CONSTRAINT [PK_Children] PRIMARY KEY CLUSTERED
    (
    [cId] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    ALTER TABLE [dbo].[Children] WITH CHECK ADD CONSTRAINT [FK_Children_TO_parent] FOREIGN KEY([pid1], [pid2])
    REFERENCES [dbo].[parent] ([pId1], [pId2])
    ON UPDATE CASCADE
    ON DELETE CASCADE
    GO
    /* the dri MUST be trusted to work, but it doesn't work anyway*/
    ALTER TABLE [dbo].[Children] CHECK CONSTRAINT [FK_Children_TO_parent]
    GO
    /* Enter data in parent and children */
    select c.cId FROM dbo.Children c INNER JOIN Parent p
    ON p.pId1 = c.pId1 AND p.pId2 = c.pId2;
    /* Execution plan will be blind to the trusted DRI--performs the join!*/
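    For comparison, here is a minimal sketch of the single-column-key case the poster says does get optimized; all object names are illustrative.
    CREATE TABLE [dbo].[parentSingle](
    [pId] [int] NOT NULL,
    CONSTRAINT [PK_parentSingle] PRIMARY KEY CLUSTERED ([pId] ASC)
    ) ON [PRIMARY]
    GO
    CREATE TABLE [dbo].[ChildrenSingle](
    [cId] [int] IDENTITY(1,1) NOT NULL,
    [pId] [int] NOT NULL,
    CONSTRAINT [PK_ChildrenSingle] PRIMARY KEY CLUSTERED ([cId] ASC),
    CONSTRAINT [FK_ChildrenSingle_TO_parentSingle] FOREIGN KEY([pId]) REFERENCES [dbo].[parentSingle] ([pId])
    ) ON [PRIMARY]
    GO
    /* With the trusted FK and a NOT NULL foreign key column, the execution plan should touch only ChildrenSingle -- the join to parentSingle is eliminated. */
    select c.cId FROM dbo.ChildrenSingle c INNER JOIN dbo.parentSingle p
    ON p.pId = c.pId;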

  • Have speed improvements been made?

    I just want to say that my posts on the forums today have appeared in the topic list nearly instantaneously — which wasn't the case just a few days ago. And if the forum administrators have made speed improvements recently, but are disappointed no-one has noticed, then perhaps this will bring a modicum of satisfaction.
    ...I wonder if someone "in the know" (Eric?) can indicate whether this is a permanent improvement. If it's merely due to draining the _internet tubes_, it may not be...

    Thanks for your reply, but I'm not sure it explains the particular slow-down I'm seeing...
    Displaying forums is as fast as ever, as is the system's acceptance of new posts/replies. The problem is, as I said, that "posts take a long time to appear". ...I load the main forum page and expect to see my just-accepted post at the top of the list, but it doesn't appear for between one and a few minutes after my post has been accepted. Yes, there's a warning to expect a delay, but I'm pointing out that these forums go through periods of near instantaneous appearance of new posts (as was the case a week ago) to a now sluggish appearance of new posts.
    The idea that there is a geographic aspect to the problem is perhaps nullified by the comment by MGW (in New Hampshire?) above:
    "Actually, the speedup has been noticeable for the past couple of days, having complained bitterly about the clog..."
    Also during "clogged" periods, duplicate posts from all over the world tend to appear as members mistakenly think their post, although accepted by the system, didn't "take" and re-enter it — because it doesn't show up in the forum's main list for a couple of minutes or so.
    ...We seem to regularly go through these alternating multi-week periods of members reporting speed and sluggishness but, so far, with no acknowledgement or explanation from the hosts.
    By the way, this post itself took a minute to appear in the main forum list — a week ago, it would have appeared almost instantaneously.
    Message was edited by: Alancito

  • Speed improvements in JDeveloper 3.0

    Will the overall response time/speed improve in JDeveloper 3.0?
    Thanks
    Mike

    Hi
    JDeveloper 3.0 is in beta right now, and it has improvements in response time/speed.
    For example the deployment wizard is way faster.
    regards
    raghu
    Michael Maculsay (guest) wrote:
    : Will the overall response time/speed improve in JDeveloper 3.0?
    : Thanks
    : Mike

  • How to increase internet access speed of OS X 10.5.8 whilst using a Telstra prepaid wireless broadband internet stick in Darwin, Australia

    Is it possible to increase the internet access speed of a MacBook Pro running OS X 10.5.8 whilst using a Telstra prepaid wireless broadband internet stick in Darwin, Australia?
    How can I do this?

    Network Utility (in the Utilities folder) can tell you, in the Info pane, what speed you are actually connecting at.
    Just set the interface to the one you are using (which may show up as en2 or something). Link Status and speed are shown along with error counts.
    If the data rate is very different from what you expected, you may have it manually set to a lower rate.
    YouTube files are enormous -- that may be the best it can do.

  • Can someone explain to me "Recommended WAN Access Speed (with services)?

    Looking at this spec sheet:
    http://www.cisco.com/c/dam/en/us/products/collateral/routers/3900-series-integrated-services-routers-isr/Routing_Poster.pdf
    Why offer GE WAN ports on a 1921 if the recommended WAN access speed is only 15Mbps?  Can someone provide context?  I think I'm misunderstanding the specs.
    Thanks!


  • OLEDB, data accessing speed

    I wrote a program in VB6 which involves frequent database accessing. The database I used previously was MS-Access 2000 and the performance of the application was very good. It takes about 2 seconds to finish a run.
    Now I changed my database to Oracle 8.1.17. The application accesses the Oracle database through Microsoft ADO and OLEDB provided by Oracle. However, the performance of the application is greatly reduced. It takes nearly one minute to complete a run.
    I think there should be no problem with the Oracle settings or database indexes, and the problem might be related to either ADO or OLEDB. I wonder if Oracle can provide the same access speed to VB applications as MS Access. I would appreciate it if anyone could share their experience.
    Ruihong


  • Shared drive access speed: AEBS vs Mac Mini

    Quick question - suppose I share the same external HDD either a) connected directly to an AirPort Extreme (previous gen n model) via USB, or b) from another networked computer (specifically a Mac Mini - 2.0 dual core, 2GB RAM, hardwired to AEBS) using USB or FireWire. Will one configuration or the other give faster access speeds from other networked computers?
    Message was edited by: pdbennett

    pdbennett wrote:
    Will one configuration or the other give faster access speeds from other networked computers?
    in my experience, a direct firewire connection is *by far* superior.
    JGG

  • Increase access speed for users in multiple countries

    We have set up a SQL Server database that serves as the back end for an Access front end.
    We have users in Australia (NSW, Victoria and SA) and in Batam, Indonesia.
    All Australian users have acceptable access speeds, but the users in Batam, Indonesia are experiencing severe slowdowns, even on a clear 12 Mbps duplex connection.
    Please advise how we may resolve this so our Indonesian users have the same access speed as our Australian users.

    Hi Nigel,
    It is correct that geo-replication only offers readable secondaries (in the Premium service tiers). If your workload is mostly reads with occasional writes, you can consider having local secondaries for the read queries and performing the writes against the central primary DB (a rough sketch follows below).
    One other thing you could look into is to shard the data, if the application and dataset allows this:
    http://azure.microsoft.com/en-us/documentation/articles/sql-database-elastic-scale-get-started/
    However if you cannot split up the data and all your sites require constant write access and operate on the same dataset, I believe it comes down to running a single DB in the 'optimal' location.
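    As a rough sketch of the geo-replication option Jan mentions (server and database names are placeholders; this is run against the master database of the primary Azure SQL server, and readable secondaries require the Premium tier as noted above):
    -- Create a readable secondary in a region close to the Batam users,
    -- then point their read-only queries at that secondary server.
    ALTER DATABASE [AppDb]
    ADD SECONDARY ON SERVER [appserver-southeastasia]
    WITH (ALLOW_CONNECTIONS = ALL);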
    Hope that helps,
    Jan

  • On an iMac will a 4GB graphics card give a noticeable speed improvement over a 2GB card?

    On an iMac will a 4GB graphics card give a noticeable speed improvement over a 2GB card?

    In terms of Lightroom I'm pretty sure the answer would be "no".

  • Site Speed Improvements

    To All:
    I am looking for ways to improve my website loading speed. I have several albums on the site (www.lens-perspective.com) and I am wondering...
    1.) if the site would load faster if I limited my albums to 10-15 pictures.
    2.) Should I optimize the pictures for the web myself instead of letting iWeb do it?
    3.) Also, instead of using the ALBUM Template page in iWeb, am I better off using a blank page and placing one picture on it for each album and then hyperlinking to each album from that page?
    4.) Lastly, there should be 2 pictures on my "About Me" page and they don't seem to load after my site is uploaded. Do you think that this is a browser issue? I host the site at GoDaddy and upload it using Fetch.
    Any other "speed" recommendations are welcome.
    Thank you,
    Randy

    1. Yes. Large albums take much longer to load, and keeping the albums under 15 images is a good idea.
    2. Probably not worth the time and effort. Your current files average about 100 KB (a good size for web use). 15 on a page would be 1.5 MB that needs to download.
    4. I see three images on the About Me page: one large and two smaller (lower part of the page).

  • Will URLS (Unified Light Speed) improve the current app. perf. as well?

    Hi,
    We are using an ABAP Web Dynpro application that encapsulates an Interactive PDF as well, but its performance is not good.
    Normally it takes 30-50 seconds when a user opens the work item from the UWL, which further calls the Web Dynpro application and shows the PDF to the user.
    My question is: if we go for EHP1 for NW, which will give Unified Rendering Light Speed in Web Dynpro ABAP, will this technology help improve the performance of the current Web Dynpro application, or only that of new applications built using it?
    Please advise, or tell me another way to improve the performance of the Web Dynpro application.
    Thanks,
    Rahul

    Hi Rahul,
    The new Light Speed rendering engine is the rendering framework used to render Web Dynpro applications. Therefore there is nothing whereby you specify that a particular application is developed using the Light Speed rendering engine.
    In other words, to answer your question, EHP1 will improve the performance of all applications, whether developed on EHP1 or prior to EHP1.
    Regards
    Rohit Chowdhary

  • Bravo on great speed improvement in LR5!

    I'm currently in the process of editing large sets of close to 300 photos each.  My workflow needs me to go through each single shot, calibrate them, and eventually make a selection of around 100 good photos per set.  I often navigate between the Library and Develop modules, and I use pretty much all tools (as necessary) when calibrating/editing the shots.  My files are D800 raw files. Note also that my library contains 13 years of photography work, with around 250,000 photos (of various sources evidently).
    I just wanted to take a few minutes off my editing process to comment on the huge improvement on speed I am experiencing in LR5 over LR4.
    In LR4, my workflow with my D800 files was very painful, with all the sluggishness I would get every time I moved from the Develop module to the Library module, and every time I used tools in the Develop module.  There was lag virtually everywhere.  I'm on a very up to date system, with SSD drives both for the photos I work on and for my system disk. 
    Since LR5, I can honestly say it feels pleasant once again to edit my photos.  Whereas I dreaded opening LR4 to do my job, I now look forward again to visiting my shots. 
    I read on the forum about the sharpness issue in lower res exports, and I'm sure there are a few more things to fix here and there.  But the HUGE improvement in responsiveness and speed in the application makes LR5 a real winner to me.  Speed is definitely a main issue that should always be addressed first with each update, always trying to improve on it and make the workflow as pleasant as possible so that we, as photographers, have to focus on only one thing: creativity. 
    I'm glad this issue was addressed with LR5, and my only wish would have been to see it addressed sooner in LR4.
    Back to editing I go...
    cheers!

