SQL Server record size limitation.

Hi all,
I am using SQL Server, Windows XP, and JBoss to write JSP pages. I am writing a CMS which needs to store a lot of documents and show them on the web.
However, some of the content is too long: it exceeds the maximum record size of SQL Server. What is the most common solution to this problem?
As the contents are in Unicode, saving them as text files seems difficult. Can anyone suggest another solution?
Thanks a lot.
kin

Thanks, gocha
but SQL Server limits the field size to 4000 (in all Unicode data types), and I want to store something that is greater than 4000 characters, so I would like to find another solution to this problem.
Thanks a lot.
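For anyone hitting the same wall: the 4000-character ceiling applies to nvarchar(n), not to the large-object types. A minimal sketch (nvarchar(max) requires SQL Server 2005 or later; on SQL Server 2000 the older ntext type plays the same role):

-- Documents table whose body column escapes the 4000-character nvarchar limit
CREATE TABLE dbo.Documents
(
    doc_id  int IDENTITY(1,1) PRIMARY KEY,
    title   nvarchar(200) NOT NULL,
    body    nvarchar(max) NOT NULL  -- Unicode text, up to 2 GB
);

From JSP/JDBC such a column can be read and written like any other string column, which avoids the text-file workaround entirely.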

Similar Messages

  • SQL Server Express - Size limitations

    Hi,
    I am currently using the SQL Server Express edition with the intention of upgrading to the full version in the future.
    I know the size limitation on the database is 4GB; however, does this include the log file as well? Nothing I read says the log file is included, so if anyone can give the definitive answer I would be grateful.
    Also, my log file maximum is set to 2GB. What issues are there with reducing this to, say, 500MB? I assume SQL Server automatically drops the old log data to add new data when the maximum is reached.
    Thanks for your help.
    Andrew

    The 4GB limit only applies to data files, not log.
    The maxsize for your log files should only be relevant if something in your workload requires more than that size, so that the log has to grow. Ideally, your log is appropriately sized for your workload so that autogrow never needs to occur.
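    A quick way to verify this yourself is to look at the data and log files separately. A hedged sketch (sys.master_files exists from SQL Server 2005 on; the database name is a placeholder):

    SELECT name, type_desc, size * 8 / 1024 AS size_mb
    FROM sys.master_files
    WHERE database_id = DB_ID('YourDatabase');

    Only the rows with type_desc = 'ROWS' count against the Express data-file cap; the 'LOG' rows do not.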
    Hello, I am using MS SQL Server 2005 Express Edition. Now it gives an out-of-memory exception while starting the service. May I know the reason? Please help me.

  • SQL Server Express Performance Limitations With OGC Methods on Geometry Instances

    I will front load my question.  Specifically, I am wondering if any of the feature restrictions with SQL Server Express cause performance limitations/reductions with OGC methods on geometry instances, e.g., STIntersects?  I have spent time reading
    various documents about the different editions of SQL Server, including the Features Supported by the Editions of SQL Server 2014, but nothing is jumping out at me.  The
    limited information on spatial features in the aforementioned document implies spatial is the same across all editions.  I am hoping this is wrong.
    The situation: I have roughly 200,000 tax parcels within 175 taxing districts. As part of a consistency check between what is stored in tax records for the taxing district and what is identified spatially, I set up a basic point-in-polygon query to identify the taxing district spatially and then count the number of parcels within each taxing district. Surprisingly, the query took 66 minutes to run. As I pointed out, this is being run on a test machine with SQL Server Express.
    Some specifics: I wrote the query a few different ways and compared the execution plans, and the optimizer always chose the same plan, which is good I guess since it means it is doing its job. The execution plans show a 'Clustered Index Seek (Spatial)' being used and only costing 1%. Coming in at 75% cost is a Filter, which appears to be connected to the STIntersects predicate. I brute-forced alternate execution plans using hints, but they only turned out worse, which I guess is also good since it means the optimizer did choose a good plan. I experimented some with changing the spatial index parameters, but the impact of the options I tried was never that much. I ended up going with "Geometry Auto Grid" with 16 cells per object.
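    For reference, that index configuration can be written out roughly like this (a hedged sketch; the table, column, and bounding-box values are placeholders, and GEOMETRY_AUTO_GRID requires SQL Server 2012 or later):

    CREATE SPATIAL INDEX six_TaxParcel_shape
    ON dbo.TaxParcel (shape)
    USING GEOMETRY_AUTO_GRID
    WITH (BOUNDING_BOX = (0, 0, 500000, 500000),  -- must cover the data extent
          CELLS_PER_OBJECT = 16);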
    So, why do I think 66 minutes is excessive?  The reason is that I loaded the same data sets into PostgreSQL/PostGIS, used a default spatial index, and the same query ran in 5 minutes.  Same machine, same data, SQL Server Express is 13x slower than
    PostgreSQL.  That is why I think 66 minutes is excessive.
    Our organization is mostly an Oracle and SQL Server shop. Since more of my background and experience are with MS databases, I prefer to work with SQL Server. I really do want to understand what is happening here. Is there something I can do differently to get more performance out of SQL Server? Does spatial run slower on Express versus Standard or Enterprise? Given that I did so little tuning in PostgreSQL, I still can't understand the results I am seeing.
    I may or may not be able to strip the data down enough to be able to send it to someone.
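    For context, the query in question boils down to something like this (a hedged sketch; the table and column names are invented):

    SELECT d.district_id, COUNT(*) AS parcel_count
    FROM dbo.TaxDistrict AS d
    JOIN dbo.TaxParcel AS p
      ON p.shape.STIntersects(d.shape) = 1   -- the predicate showing up as the 75% Filter
    GROUP BY d.district_id;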

    Tessellating the polygons (tax districts) is the answer!
    Since my use of SQL Server Express was brought up as possibly contributing to the slow runtime, the first thing I did was download an evaluation version of Enterprise Edition.  The runtime on Enterprise Edition dropped from 66 minutes to 57.5 minutes.
     A reduction of 13% isn't anything to scoff at, but total runtime was still 11x longer than in PostgreSQL.  Although Enterprise Edition had 4 cores available to it, it never really spun up more than 1 when executing the query, so it doesn't seem
    to have been parallelizing the query much, if at all.
    You asked about polygon complexity.  Overall, a majority are fairly simple but there are some complex ones with one really complex polygon.  Using the complexity index discussed in the reference thread, the tax districts had an average complexity
    of 4.6 and a median of 2.7.  One polygon had a complexity index of 120, which was skewing the average, as well as increasing the runtime I suspect.  Below is a complexity index breakdown:
    Index   NUM_TAX_DIST
    1       6
    <2      49
    <3      44
    <4      23
    <5      11
    <6      9
    <7      9
    <8      4
    <9      1
    <10     4
    >=10    14
    Before trying tessellation, I tweaked the spatial indexes in several different ways, but the runtimes never changed by more than a minute or two. I reset the spatial indexes to "geometry auto grid @ 32" and tried out your tessellation functions using the default of 5,000 vertices. Total runtime 2.3 minutes, a 96% reduction and twice as fast as PostgreSQL! Now that is more what I was expecting before I started.
    I tried different thresholds, 3,000 and 10,000 vertices, but the runtimes were slightly slower, 3.5 and 3.3 minutes respectively. A threshold of 5,000 definitely seems to be a sweet spot for the dataset I am using. As the thread you referenced discussed, SQL Server spatial functions like STIntersects appear to be sensitive to the number of vertices of polygons.
    Your comment reminded me of some discussions with Esri staff about ArcGIS doing the same thing in certain circumstances, but I didn't go as far as thinking to apply it here. So, thanks for the suggestion and code from another post. Once I realized the SRID was hard-coded to 0 in tvf_QuarterPolygon, I was able to update the code to set it to the same as the input shape, and then everything came together nicely.

  • SQL Server 2012 - Capacity Limits by Edition

    I have a question about the capacity limits of SQL Server 2012.
    The documentation http://msdn.microsoft.com/en-us/library/ms143760.aspx points to an interesting limitation of the SQL Server 2012 Enterprise Edition:
    Enterprise Edition with Server + Client Access License (CAL) based licensing is limited to a maximum of 20 cores per SQL Server instance.
    This sounds like a technical limitation. How does the Enterprise Edition know that it is running under CAL-based licensing and not in Core mode? I know that older editions like SQL 2000 had a tool to set the license mode. Is there a similar tool available for SQL 2012?
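    One thing you can check from T-SQL is what the instance reports about its own licensing (a hedged sketch; on most modern versions LicenseType simply returns DISABLED, so treat the output with caution):

    SELECT SERVERPROPERTY('Edition')     AS edition,
           SERVERPROPERTY('LicenseType') AS license_type,
           SERVERPROPERTY('NumLicenses') AS num_licenses;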

    Hi Tron,
    Once the decision on the licensing vector is made, the DBA or organization must decide whether to license under the Server & CAL model or under the Core-Based Licensing model. Server & CAL has been around for years, and simply refers to the model where one license applies to the server on which SQL Server is installed (note that multiple instances are allowed) and one CAL applies to each user or application service accessing the licensed SQL Server. One CAL is required for every user or application service regularly connecting to SQL Server.
    Please refer to the articles below for the details.
    Understanding the SQL Server 2012 Licensing Model
    http://www.mssqltips.com/sqlservertip/2942/understanding-the-sql-server-2012-licensing-model/
    Thanks
    Candy Zhou

  • Access-SQL Server (Client Server Configuration) Best Way To Refresh SQL Server Records ?

    We are using Access 2013 as the front end and SQL Server 2014 as the back end to a client server configuration.
    Access controls are bound to the SQL fields with the same names. When using Access to create a new record in a form, the data are not transferred to SQL if the form is exited to display a different form or if Access is closed. If the right or left arrow navigation buttons at the bottom of the form are first used to display either the previous or next record, then the data in the new record are correctly transferred to SQL.
    What is the best way to flush the new record to SQL before the bound Access form is closed? We have tried a Requery of the entire form and of all the individual controls, without success. We are looking for a method of refreshing SQL that behaves like the navigation buttons.
    Thank you very much for your assistance.
    Robert Robinson
    RERThird

    Hi Stefan,
    I had added the code to set Me.Dirty = False in response to the On Dirty event and didn't realize that it was working properly. I had tried several other approaches and must have become confused somewhere along the line.
    I retested the program. On Dirty is working and the problem is solved.
    Thank you very much for your assistance.
    Robert Robinson
    RERThird

  • Does a SQL Server record delete (row or multiple related rows) stand up to the official definition of record destruction for a system that needs to meet DOD 5015.2 requirements?

    I am being asked by our records manager about whether or not when a delete occurs from SQL Server that it can stand up in court as true if we declare that the record has been destroyed and is no longer available.  In other words, are there ways in
    which forensic experts could analyse the disk and actually recover records that have been deleted from a table or tables?  If so, then we would be in error if we declare that the records have been deleted.
    Steve

    Agree with Erland here: deleted records can be read from the transaction log, and there are tools on the market which can do it, unless all the virtual log files (you can think of these as small log files) that were part of that transaction have been overwritten by new information. If they are still there and a VLF has not been reused, it can be read. I am not exactly sure of the mechanism by which such tools do it.
    Here you gently point out a not-so-small error in my post. I said that the data would be zeroed out after the log has been truncated. But that is not the case. As you say, the data will be there until the log has wrapped around, which can take a long time with a large log file.
    Erland, I don't agree that the ghost cleanup task takes only minutes to clean things up when a large chunk of data is deleted; there are a lot of factors involved and it can take more than a couple of hours (in my case it did). The time frame is simply not definite.
    Yes, you are correct. I wanted to diminish the issue with the ghost records, since they may be the smallest issue in this case. The log file, and most of all the backups, are a lot more significant.
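    If you want to see whether ghost records from a delete are still physically present, a DMV can show the counts; a hedged sketch (the DETAILED scan mode reads every page, so it is expensive on large tables):

    SELECT OBJECT_NAME(object_id) AS table_name,
           index_id,
           ghost_record_count,
           version_ghost_record_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED')
    WHERE ghost_record_count > 0;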
    Erland Sommarskog, SQL Server MVP, [email protected]

  • Code Inspector - Record size limitation ???

    Hello all,
    I am running into a situation where an inspection results in a large number of records.
    The user interface warns me that this is the case e.g. >40,000 records found.
    Question:
    1. How can I view all the issues associated with my object selection (assume 500 objects) ?
    That is, if the UI is limiting the display, to, say, the first 300 objects yielding 40,000 issues, how can I see the issues associated with the remaining 200 objects?
    2. Is this just a UI limitation?
    I have searched but found no answer to such questions.  Your help is appreciated.
    Best,  John

    Thank you for the suggestion. I should have given more detail on the nature of my problem...
    My object list comes from a reference to a very large transport with hundreds of objects.
    So it is not so straightforward for me to separate by objects. It would take hand comparison of objects and hand-building of object lists through visually scrolling results sorted by object. EXTREMELY painful.
    Is there a better way? 
    Regards,  John

  • SQL Server Express checking DB Size

    Hello,
    I frequently connect to customers using Sage 300 with SQL Express. Many times, for a quote, I need to check the SQL database size.
    Often I am connecting to a workstation, and the customer either does not have access to their SQL Server to open SQL Server Management Studio, or they just do not know their password.
    Is there a way to check the size of a DB from a workstation that runs local programs but accesses SQL on a server, if I do not have access to the SQL Server itself? I do have the SQL Server name, but often I am unable even to see the SQLDATA.mdf files from the local machine.
    Thanks,
    Debbie

    Hello,
    Please read about a few options:
    http://blogs.technet.com/b/heyscriptingguy/archive/2010/11/02/use-powershell-to-obtain-sql-server-database-sizes.aspx
    You can also use Performance Monitor tool (or System Monitor), inside the set of counters named “SQL Server: Databases” you will find
    the counter “Data File(s) Size (KB)”.
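    If you can open any ODBC/OLE DB connection from the workstation, a plain query works too; a hedged sketch (sys.master_files needs VIEW ANY DEFINITION permission; sp_spaceused reports only the database you are connected to):

    EXEC sp_spaceused;

    SELECT DB_NAME(database_id) AS db_name, SUM(size) * 8 / 1024 AS size_mb
    FROM sys.master_files
    GROUP BY database_id;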
    Hope this helps.
    Regards,
    Alberto Morillo
    SQLCoffee.com

  • SQL Server 2005 performance decreases with DB size while SQL Server 2012 is fine

    Hi,
    We have a C# windows service running that polls some files and inserts/updates some fields in database.
    The service was tested on a local dev machine running SQL Server 2012, and performance was quite decent with any number of records. Later the service was moved to a test stage environment where SQL Server 2005 is installed. At that point the database was still empty and the service ran just fine, but later, after some 500k records were written, performance problems came to light. After some more tests we found out that, basically, database operation performance in SQL Server 2005 decreases in direct correlation with the database size. Here are some testing results:
    Run#                 1        2        3        4        5
    DB size (records)    520k     620k     720k     820k     920k

    SQL Server 2005
    TotalRunTime         25:25.1  32:25.4  38:27.3  42:50.5  43:51.8
    Get1                 00:18.3  00:18.9  00:20.1  00:20.1  00:19.3
    Get2                 01:13.4  01:17.9  01:21.0  01:21.2  01:17.5
    Get3                 01:19.5  01:24.6  01:28.4  01:29.3  01:24.8
    Count1               00:19.9  00:18.7  00:17.9  00:18.7  00:19.1
    Count2               00:44.5  00:45.7  00:45.9  00:47.0  00:46.0
    Count3               00:21.7  00:21.7  00:21.7  00:22.3  00:22.3
    Count4               00:23.6  00:23.9  00:23.9  00:24.9  00:24.5
    Process1             03:10.6  03:15.4  03:14.7  03:21.5  03:19.6
    Process2             17:08.7  23:35.7  28:53.8  32:58.3  34:46.9
    Count5               00:02.3  00:02.3  00:02.3  00:02.3  00:02.1
    Count6               00:01.6  00:01.6  00:01.6  00:01.7  00:01.7
    Count7               00:01.9  00:01.9  00:01.7  00:02.0  00:02.0
    Process3             00:02.0  00:01.8  00:01.8  00:01.8  00:01.8

    SQL Server 2012
    TotalRunTime         12:51.6  13:38.7  13:20.4  13:38.0  12:38.8
    Get1                 00:21.6  00:21.7  00:20.7  00:22.7  00:21.4
    Get2                 01:38.3  01:37.2  01:31.6  01:39.2  01:37.3
    Get3                 01:41.7  01:42.1  01:35.9  01:44.5  01:41.7
    Count1               00:20.3  00:19.9  00:19.9  00:21.5  00:17.3
    Count2               01:04.5  01:04.8  01:05.3  01:10.0  01:01.0
    Count3               00:24.5  00:24.1  00:23.7  00:26.0  00:21.7
    Count4               00:26.3  00:24.6  00:25.1  00:27.5  00:23.7
    Process1             03:52.3  03:57.7  03:59.4  04:21.2  03:41.4
    Process2             03:05.4  03:06.2  02:53.2  03:10.3  03:06.5
    Count5               00:02.8  00:02.7  00:02.6  00:02.8  00:02.7
    Count6               00:02.3  00:03.0  00:02.8  00:03.4  00:02.4
    Count7               00:02.5  00:02.9  00:02.8  00:03.4  00:02.5
    Process3             00:21.7  00:21.0  00:20.4  00:22.8  00:21.5
    One more thing: it is not the Process2 table that constantly grows in size but the Process1 table, which gains almost 100k records each run.
    After that, SQL Server 2005 was also installed on a dev machine just to test things, and we got exactly the same results. Both the SQL Server 2005 and 2012 instances are installed with default settings, with no changes at all. The same goes for the databases created for the service.
    So the question is: why are there such huge differences between the performance of SQL Server 2005 and 2012? Maybe there are some settings that are set by default in a SQL Server 2012 database that need to be set manually in 2005?
    What else can I try to test? The main problem is that the production SQL Server will be updated god-knows-when, and we can't just wait for that.
    Any suggestions/advices are more than welcome.

    ...One more thing: it is not the Process2 table that constantly grows in size but the Process1 table, which gains almost 100k records each run...
    Hi,
    It is not clear to me what you are doing, but now we have a better understanding of ONE of your tables, and it is obvious that you will get worse results as the data becomes bigger. Actually, your table looks like a table built automatically by an ORM such as Entity Framework, and its DDL probably does not match your needs. For example, if your select query filters on a column other than [setID], then you have no index and the server probably has to scan the entire table in order to find the records that you need.
    A forum is a suitable place to seek advice about a specific system (as I mentioned before, we are not familiar with your system), and it is more suitable for general questions. For example, the fact that you have no index except the one on the column [setID] can indicate a problem. Ultimately, optimizing the system will require investigating it more thoroughly (at which point a forum is no longer the right place... but we're not there yet). Another point: we can now see that you are using a [timestamp] column, which implies you are using this column as a filter when selecting the data. If so, then maybe a better DDL would be a clustered index on this column and, if needed, a nonclustered index on [setID] (a sketch follows below); what is obvious is that the next step is to check whether this DDL fits your specific needs (as I mentioned before).
    The next step is to understand what actions you perform against this table: (1) what is the query that becomes slow on a bigger data set, and (2) are you using an ORM (object-relational mapping, like Entity Framework code first), and if so, which one?
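    A hedged sketch of the suggested DDL (the table and column names are assumed from the discussion, and creating the clustered index will fail if the table already has one, so check first):

    CREATE CLUSTERED INDEX cix_Process1_timestamp ON dbo.Process1 ([timestamp]);
    CREATE NONCLUSTERED INDEX ix_Process1_setID ON dbo.Process1 ([setID]);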
    [Personal Site] [Blog] [Facebook]

  • How to calculate the HFM Cube size in SQL Server-2005

    Hi
    How to calculate the HFM Cube size in SQL Server-2005 ?
    Below query used for Oracle. Then what is query for SQL Server?
    SQL> select sum(bytes/1024/1024) from dba_segments where segment_name like 'FINANCIAL_%' and owner='HFM';
    SQL> select sum(bytes/1024/1024) from dba_segments where segment_name like 'HSV FINANCIAL%' and owner='HFM';
    Regards
    Smilee
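    A rough SQL Server analogue of the Oracle queries above might look like this (a hedged sketch: the LIKE pattern and schema are assumed, and joining sys.allocation_units on partition_id is an approximation that is close enough for sizing):

    SELECT SUM(a.total_pages) * 8 / 1024 AS size_mb
    FROM sys.tables t
    JOIN sys.indexes i ON i.object_id = t.object_id
    JOIN sys.partitions p ON p.object_id = i.object_id AND p.index_id = i.index_id
    JOIN sys.allocation_units a ON a.container_id = p.partition_id
    WHERE t.name LIKE 'FINANCIAL%';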

    What is your objective? The subcube in HFM is a concept which applies to the application tier - not so much to the database tier. The size of the subcube is the unique number of data strips (data values for January - December inclusive, for example) for the given entity, currency triplet or Parent.Child node. You have to account for parent accounts and customs which don't exist in the database but are generated in RAM in the application tier.
    So, if your objective is to find the largest subcubes, you could do this by querying the database and counting the number of records per entity/value (DCE tables) or parent.child entity combination (DCN tables). I'm not versed in SQL, but I think the queries above would just tell you the schema size and not the subcube sizes.
    Check out Accelatis.com for a third party software product that can do this for you. The feature is called the Subcube Analyzer and was written by the same team that wrote HFM, so they ought to know how this works :-)
    --chris

  • What would be best approach to migrate millions of records from on premise SQL server to Azure SQL DB?

    Team,
    In our project, we have a requirement of data migration. We have following scenario and I really appreciate any suggestion from you all on implementation part of it.
    Scenario:
    We have millions of records to be migrated to destination SQL database after some transformation.
    The source SQL server is on premise in partners domain and destination server is in Azure.
    Can you please suggest the best approach to do so?
    thanks,
    Bishnu
    Bishnupriya Pradhan

    You can use SSIS itself for this.
    Have batch logic that identifies data batches within the source, and then include data flow tasks to do the data transfer to Azure. The batch size should be chosen according to buffer memory availability, the number of parallel tasks executing, etc. The batching idea is sketched below.
    You can use an ODBC or ADO.NET connection to connect to Azure.
    http://visakhm.blogspot.in/2013/09/connecting-to-azure-instance-using-ssis.html
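    A hedged sketch of what that batch logic might look like on the source side (the table and key names are invented; in SSIS the range boundaries would be package variables feeding the data flow source query):

    DECLARE @batch_size INT = 50000;
    DECLARE @from_id INT, @max_id INT;

    SELECT @from_id = MIN(id), @max_id = MAX(id) FROM dbo.SourceTable;

    WHILE @from_id <= @max_id
    BEGIN
        -- one bounded chunk per iteration; this is the shape of the data flow source query
        SELECT id, col1, col2
        FROM dbo.SourceTable
        WHERE id >= @from_id AND id < @from_id + @batch_size;

        SET @from_id = @from_id + @batch_size;
    END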
    Visakh
    My Wiki User Page
    My MSDN Page
    My Personal Blog
    My Facebook Page

  • Checking Maximum size limit of SQL-SERVER database.

    It is said that the Express version of SQL Server has a size limit. How can I check what size limit my database has using Management Studio Express?

    Yes, Express has limitations regarding CPU, memory, and database size. You can refer to the link below:
    http://msdn.microsoft.com/en-us/library/cc645993.aspx
    How do I check what the size limit of my database is?
    Come on, please do some basic reading before asking questions. I am not discouraging you from posting, but if you search the net for how to find the size of a SQL Server database you will get lots of links. The maximum size of your database is the maximum size supported by your edition of SQL Server; in your case, since it is Express 2008 R2, that is 10 GB. For the current size, please do as advised above.
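    A hedged sketch for the current size, runnable from any client connected to the database in question:

    SELECT name, type_desc, size * 8 / 1024 AS size_mb
    FROM sys.database_files;
    -- or simply:
    EXEC sp_spaceused;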

  • SQL server error log size is too big to handle

    I am working with a large database on Windows SQL Server 2008 R2 that has to run continuously 24x7, so it is not possible to restart the server from time to time. It is a kind of monitoring system for big machines. Because of this, the SQL Server error logs are growing too big, sometimes up to 60-70 GB, on a limited-size hard drive. I can't keep deleting them manually. Can someone please suggest a way to stop the creation of such error logs, or to recycle them after some time? Most of the entries are of this kind:
    Setting database option RECOVERY to simple for database db_name
    P.S. I have read about limiting the error logs to 6, etc., but that didn't help. It would be best if you could suggest some method to disable these logs.

    Hi Mohit11,
    According to your description, your SQL Server error logs are growing too big to handle at a limited sized hard drive, and you want to know how to stop the generation of such error logs or recycle them after sometime automatically without restarting the
    SQL Server, right?
    As others mentioned above, we may not be able to disable SQL server error log generation. However we can recycle the error logs automatically by running the
    sp_cycle_errorlog on a fixed schedule (i.e. every two weeks) using SQL agent jobs so that the error logs will be recycled
    automatically without restarting SQL Server.
    And it is also very important for us to keep the error log files more readable. So we can increase the number of error logs a little more and run the sp_cycle_errorlog more frequently (i.e. daily), then each file will in a smaller size to be more readable
    and we can recycle the log files automatically.
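    The cycling step itself is a single call, which a SQL Agent job can run on whatever schedule fits (the number of retained logs is configured separately in the instance's error log settings):

    EXEC sp_cycle_errorlog;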
    In addition, to avoid the total size of the log files unexpectedly growing too big (it can happen), we can run the following script in a SQL Agent job to automatically delete all the old log files when their combined size is larger than some value we want to keep (e.g. 30GB):
    --create a temp table to gather information about the error log files
    CREATE TABLE #ErrorLog
    (
        Archive INT,      -- archive number of the log file
        Dt DATETIME,      -- creation date of the log file
        FileSize BIGINT   -- size in bytes (BIGINT so very large logs do not overflow)
    )
    GO
    INSERT INTO #ErrorLog
    EXEC xp_enumerrorlogs
    GO
    --delete all the old log files if the total size of the log files is larger than 30GB
    DECLARE @i int = 1;
    DECLARE @Log_number int;
    DECLARE @Log_Max_Size int = 30*1024; --max total size (MB) of the error log files we want to keep; change the value according to your requirement
    DECLARE @SQLSTR VARCHAR(1000);
    SET @Log_number = (SELECT COUNT(*) FROM #ErrorLog);
    IF (SELECT SUM(FileSize)/1024/1024 FROM #ErrorLog) >= @Log_Max_Size
    BEGIN
        WHILE @i <= @Log_number
        BEGIN
            --the path contains spaces, so quote it for cmd; adjust it to your instance's log directory
            SET @SQLSTR = 'DEL "C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG.' + CONVERT(VARCHAR,@i) + '"';
            EXEC xp_cmdshell @SQLSTR;
            SET @i = @i + 1;
        END
    END
    DROP TABLE #ErrorLog
    For more information about How to manage the SQL Server error log, please refer to the following article:
    http://support.microsoft.com/kb/2199578
    If you have any question, please feel free to let me know.
    Regards,
    Jerry Li

  • SQL Server Express - Hardware (cores) limitations ?

    You can find a lot of information about the physical limitations of the SQL Server Express engine, but we have a question regarding the "licensing" (even when it's for free ;o))).
    Is it allowed to install SQL Server Express on a physical machine with 8 cores?
    And if it's allowed, is it allowed for the SQL Server 2008, 2008 R2, and 2012 versions? Or are there differences because of the rather big changes in the licensing model in SQL Server 2012?

    Hi,
    The Express edition is free and does not require a license. However, it doesn't have all of the features of Standard. There are some limitations, such as database size (10GB), CPUs (1 socket or 4 cores, whichever is less), and SQL memory (1GB). Installing on a machine with more cores is fine; the engine simply will not use more than its limit.
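    A hedged way to see this from the instance itself (cpu_count is what the OS exposes; scheduler_count is what the engine actually created):

    SELECT cpu_count, scheduler_count
    FROM sys.dm_os_sys_info;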
    You can refer to the following web page for more information:
    Compute Capacity Limits by Edition of SQL Server
    http://technet.microsoft.com/en-us/library/ms143760.aspx
    Features Supported by the Editions of SQL Server 2012
    http://msdn.microsoft.com/en-us/library/cc645993.aspx
    Thanks.
    Tracy Cai
    TechNet Community Support

  • Best size for VM SQL Server for a start or maybe Azure SQL Database?

    Hello Everyone,
    My question is quite general, but some of you may have already considered similar issues. Let's say, or assume, I have a website and a mobile application (of the social kind) used by:
    1. 10K users
    2. 100K users
    3. 1M users
    4. 20M users max
    I am strongly considering SQL Server on a VM, mostly because of its partitioning functionality, some advantages in indexing compared to Azure SQL Database, and the availability of Agent jobs (the kind of partitioning I mean is sketched below).
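    For concreteness, a hedged sketch of that partitioning (the names and boundaries are invented, and note that table partitioning requires Enterprise Edition before SQL Server 2016 SP1):

    CREATE PARTITION FUNCTION pf_users_by_id (int)
    AS RANGE RIGHT FOR VALUES (1000000, 2000000, 3000000);

    CREATE PARTITION SCHEME ps_users_by_id
    AS PARTITION pf_users_by_id ALL TO ([PRIMARY]);

    CREATE TABLE dbo.UserActivity
    (
        user_id     int           NOT NULL,
        activity_at datetime2     NOT NULL,
        payload     nvarchar(400) NULL
    ) ON ps_users_by_id (user_id);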
    Question: what would be the best size and count of SQL Server VMs for the start, and later on?
    Question 2: I am able to move the job functionality to my worker role if I want to consider Azure SQL Database. SQL Server still has at least those two advantages (partitioning, better indexing): is the Azure SQL Database service worth considering for 100K users and more?
    Regards,

    Hello Jambor,
    Considering the number of users, I wouldn't recommend an Azure SQL database, because of certain limitations like max worker threads, max sessions, etc. Also, the maximum size of the database can be 500GB.
    Please refer to http://msdn.microsoft.com/en-us/library/azure/dn741336.aspx and http://msdn.microsoft.com/en-us/library/azure/dn369873.aspx for the Azure SQL database options and limitations.
    If your application needs to scale out, Azure SQL Databases are recommended. If it needs scaling up, then the choice is SQL Server on an Azure VM. Please refer to http://azure.microsoft.com/blog/2012/06/26/data-series-sql-server-in-windows-azure-virtual-machine-vs-sql-database/ and http://azure.microsoft.com/en-us/documentation/articles/data-management-azure-sql-database-and-sql-server-iaas/ for a comparison.
    If you already have your application running successfully on your on-premise server, you could use a similar machine on Azure for hosting your SQL Server instance. The options are at http://azure.microsoft.com/en-us/pricing/details/virtual-machines/. You could start with SQL Server Enterprise Edition on an A3 VM and then scale up as needed.
    Regards,
    Kumar Bijayanta
