Curious as to your table storage params

Hi everyone, I'm a long-time Oracle guy, first-time poster to these forums! I'm just curious as to your table storage parameters. For instance, do you use settings such as initial 128k next 128k pctincrease 0 for production tables after analysis? Or do you crank those settings way up, even "past" your projected usage? Anyone using megabytes for their table extents, such as initial 100M next 50M? Or is that considered bad practice to project 10-15 years into the future?
Basically I'm re-doing the storage structure of the database that our 3rd-party supplier set up; they set everything up with 32k initial extents, 8k next extents, and 121 max extents. (Sounds like Oracle 7!) Since that gives us only about a meg of storage in a 2 GB tablespace, I'm currently analyzing our sample data to see what our requirements will be.
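(For reference, current allocation is easy to check while sizing; a minimal sketch, where the schema name is just a placeholder:)
select segment_name, segment_type,
       round(bytes/1024/1024, 1) as mb, extents
  from dba_segments
 where owner = 'APP_OWNER'   -- placeholder schema name
 order by bytes desc;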
Actually does anyone mess with kilobyte extents anymore? Or do you go right to megabyte extents?
Thanks!
-Thomas H

Thomas:
It has always been best practice to size your objects appropriately, and to take all possible steps to ensure that tablespaces do not become fragmented. With LMTs, it is just that much easier, because users cannot violate the storage parameters you set up at the tablespace level.
With DMTs, a user can specify storage parameters that differ from the tablespace defaults and screw up your careful analysis. With LMTs, any user-supplied storage parameters are effectively ignored (well, they impact the number of extents initially allocated to the object, but not the size of those extents).
That said, I would do at least some minimal analysis of the space required for each of the tables and indexes. If, like most OLTP databases, you have a wide variety of object sizes, you can somewhat optimize disk usage by keeping like-sized objects together in one tablespace.
For example, in one (payroll/HR) application I support, we have 3 LMT tablespaces: small, medium, and large.
Small holds the hundreds of small (2 - ~1,000 rows) lookup tables and their indexes, and has uniform extents of 64K.
Medium holds the dozens of larger tables like employee demographics, job history, etc. (~1,000 - ~1,000,000 rows), the indexes on those tables, and some of the large tables; it has uniform extents of 5M.
Large holds the 5 or 6 extremely large tables like the detailed daily pay history (> 45,000,000 rows) and some of their larger indexes. This has uniform extents of 100M.
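(In DDL terms, the setup is roughly the following; a sketch only, with datafile paths and sizes made up for illustration:)
create tablespace small_ts
  datafile '/u01/oradata/prod/small_ts01.dbf' size 512m    -- illustrative path/size
  extent management local uniform size 64k;

create tablespace medium_ts
  datafile '/u01/oradata/prod/medium_ts01.dbf' size 4096m
  extent management local uniform size 5m;

create tablespace large_ts
  datafile '/u01/oradata/prod/large_ts01.dbf' size 20480m
  extent management local uniform size 100m;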
HTH
John

Similar Messages

  • Migrating Azure table storage content to an on-prem SQL database

    Hi,
    Is it possible to import Azure table storage data to an on-prem SQL database?

    Hi
    You cannot do it directly from SQL or Azure Storage Explorer.
    But you can have a little application that extracts your table storage data into a CSV file, like this:
    http://blogs.msdn.com/b/jmstall/archive/2012/08/03/converting-between-azure-tables-and-csv.aspx
    And then import the generated CSV file into your on-prem SQL database.
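    (As a rough C# sketch of the export step - the connection string, table name, and naive CSV formatting are all assumptions:)
    // Dump a table's entities to CSV (simple properties only; no escaping of commas)
    var account = CloudStorageAccount.Parse(connectionString);                  // assumed connection string
    var table = account.CreateCloudTableClient().GetTableReference("MyTable"); // assumed table name
    TableContinuationToken token = null;
    using (var writer = new StreamWriter("export.csv"))
    {
        do
        {
            var segment = table.ExecuteQuerySegmented(new TableQuery<DynamicTableEntity>(), token);
            token = segment.ContinuationToken;
            foreach (var entity in segment.Results)
                writer.WriteLine(string.Join(",",
                    new[] { entity.PartitionKey, entity.RowKey }
                        .Concat(entity.Properties.Values.Select(p => Convert.ToString(p.PropertyAsObject)))));
        } while (token != null);
    }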
    Regards
    Aram
    Aram Koukia

  • Is this the best design for asynchronous notifications (such as email)? Current design uses Web Site, Azure Service Bus Queue, Table Storage and Cloud Service Worker Role.

    I am asking for feedback on this design. Here is an example user story:
    As a group admin on the website I want to be notified when a user in my group uploads a file to the group.
    The easiest solution would be to create and send the email message directly in the code that handles the upload. However, that doesn't seem like the appropriate level of separation of concerns, so instead we are thinking of having a separate worker process which does nothing but send notifications. The website's upload code handles receiving the file, extracting some metadata from it (like the filename), and writing this to the database. As soon as it is done handling the file upload, it does two things: it writes the details of the notification to be sent (such as subject, filename, etc...) to a dedicated "notification" table, and it creates a message in a queue which the notification-sending worker process monitors. The entire sequence is shown in the diagram below.
    My questions are: Do you see any drawbacks in this design? Is there a better design? The team wants to use Azure Worker Roles, Queues and Table storage. Is it the right call to use these components or is this design unnecessarily complex? Quality attribute
    requirements are that it is easy to code, easy to maintain, easy to debug at runtime, auditable (history is available of when notifications were sent, etc...), monitor-able. Any other quality attributes you think we should be designing for?
    More info:
    We are creating a cloud application (in Azure) in which there are at least 2 components. The first is the "source" component (for example a UI / website) in which some action happens or some condition is met that triggers a second component or "worker"
    to perform some job. These jobs have details or metadata associated with them which we plan to store in Azure Table Storage. Here is the pattern we are considering:
    Steps:
    Condition for job met.
    Source writes job details to table.
    Source puts job in queue.
    Asynchronously:
    Worker accepts job from queue.
    Worker records DateTimeStarted in table.
    Queue marks the job as "in progress".
    Worker performs job.
    Worker updates table with details (including DateTimeCompleted).
    Worker reports completion to queue.
    Job deleted from queue.
    Please comment and let me know if I have this right, or if there is some better pattern. For example's sake, consider the work to be "sending a notification" such as an email whose template fields are filled from the "details" mentioned in the pattern.
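    (To make the asynchronous steps concrete, here is a bare-bones sketch of the worker loop; the queue/table names, message format, and entity shape are all assumptions:)
    // Assumed job entity: PartitionKey/RowKey identify the notification row
    public class JobEntity : TableEntity
    {
        public DateTime? DateTimeStarted { get; set; }
        public DateTime? DateTimeCompleted { get; set; }
    }
    // account: a CloudStorageAccount; this loop runs inside the worker role
    var queue = account.CreateCloudQueueClient().GetQueueReference("notifications");    // assumed name
    var table = account.CreateCloudTableClient().GetTableReference("NotificationJobs"); // assumed name
    while (true)
    {
        var msg = queue.GetMessage(TimeSpan.FromMinutes(5)); // hidden from other workers while in progress
        if (msg == null) { Thread.Sleep(1000); continue; }
        var keys = msg.AsString.Split('|');                  // assume a "partitionKey|rowKey" payload
        var job = (JobEntity)table.Execute(TableOperation.Retrieve<JobEntity>(keys[0], keys[1])).Result;
        job.DateTimeStarted = DateTime.UtcNow;
        table.Execute(TableOperation.Replace(job));
        // ... send the notification itself here ...
        job.DateTimeCompleted = DateTime.UtcNow;
        table.Execute(TableOperation.Replace(job));
        queue.DeleteMessage(msg);                            // delete only after success
    }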

    Hi,
    Thanks for your posting.
    This design can rule out some errors, such as two file uploads completing at the same time... from my experience, this is a good choice to achieve the goal.
    Best Regards,
    Jambor  

  • [Urgent] Interruption of access to the Table Storage

    Hi,
    We have code that takes a record from Table Storage (shown below).
    It worked perfectly, but at some point it started always returning null.
    Using the VS tools, however, I can see there are records in the table.
    Please help us solve this issue ASAP.
    StorageCredentials creds = new StorageCredentials(storageName, key);
    CloudStorageAccount account = new CloudStorageAccount(creds, useHttps: true);
    CloudTableClient client = account.CreateCloudTableClient();
    CloudTable table = client.GetTableReference("ClientMetadata");
    var query = new TableQuery<AzMetadataItem>().Take(1);
    var res = await table.ExecuteQuerySegmentedAsync(query, null).ConfigureAwait(false);
    var metadata = res.FirstOrDefault();

    Segmented queries can return fewer records than your app requested - even zero - along with a continuation token. You have to take the continuation token returned by each segmented query and pass it to the next segmented query request until you get all the records you need.
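    (A minimal sketch of that loop, reusing the table and entity type from the question:)
    TableContinuationToken token = null;
    AzMetadataItem metadata = null;
    do
    {
        var seg = await table.ExecuteQuerySegmentedAsync(
            new TableQuery<AzMetadataItem>().Take(1), token).ConfigureAwait(false);
        token = seg.ContinuationToken;
        metadata = seg.FirstOrDefault();          // stop once a record shows up
    } while (metadata == null && token != null);  // an empty segment can still carry a token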

  • How to improve performance for Azure Table Storage bulk loads

    Hello all,
    Would appreciate your help as we are facing a challenge.
    We are trying to bulk load Azure table storage. We have a file that contains nearly 2 million rows.
    We would need to reach a point where we could bulk load 100,000-150,000 entries per minute. Currently, it takes more than 10 hours to process the file.
    We have tried Parallel.ForEach, but it doesn't help. Today I discovered partitioning in PLINQ. Would that be the way to go?
    Any ideas? I have spent nearly two days in trying to optimize it using PLINQ, but still I am not sure what is the best thing to do.
    Kindly, note that we shouldn't be using SQL/Azure SQL for this.
    I would really appreciate your help.
    Thanks

    I'd think you're just pooling the parallel connections to Azure if you do it on one system. You'd also have the bottleneck of round-trip time from you, through the internet, to Azure and back again.
    You could speed it up by moving the data file to the cloud and process it with a Cloud worker role.  That way you'd be in the datacenter (which is a much faster, more optimized network.)
    Or, if that's not fast enough - if you can split the data so multiple worker roles can each process part of the file, you can scale out to enough machines that the job gets done quickly.
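    Whichever way you run it, entity-group batches usually give the biggest win, since up to 100 same-partition entities go in one round trip. A rough sketch (the entity list, table reference, and degree of parallelism are assumptions):
    // Common HTTP-stack tuning for table storage (System.Net)
    ServicePointManager.DefaultConnectionLimit = 100;
    ServicePointManager.Expect100Continue = false;
    ServicePointManager.UseNagleAlgorithm = false;
    // entities: your parsed rows (ITableEntity); table: a CloudTable reference
    Parallel.ForEach(entities.GroupBy(e => e.PartitionKey),
        new ParallelOptions { MaxDegreeOfParallelism = 8 },  // illustrative value
        group =>
        {
            // batches may only contain entities from one partition, max 100 each
            foreach (var chunk in group.Select((e, i) => new { e, i })
                                       .GroupBy(x => x.i / 100, x => x.e))
            {
                var batch = new TableBatchOperation();
                foreach (var entity in chunk)
                    batch.Insert(entity);
                table.ExecuteBatch(batch);                   // one round trip per batch
            }
        });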
    Darin R.

  • Azure table storage rest API including

    How do I access my table storage using the REST API?
    Any example would be appreciated, including enabling the REST API.

    Hi,
    Please have a look at this article:
    http://blogs.msdn.com/b/tconte/archive/2011/08/10/accessing-windows-azure-blob-storage-using-jquery.aspx, hope it helps. You could also consider using
    jQuery to call code-behind that performs the Azure table storage operations; there you can use either the Azure SDK or the Azure storage REST API.
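    (For a bare REST illustration: once you already have a SAS token for the table, a query is just an HTTP GET. The account, table name, and token variable here are assumptions, and this runs inside an async method:)
    // Query 10 entities over REST with a pre-generated SAS token (assumed to start with '?')
    var url = "https://myaccount.table.core.windows.net/MyTable()" + sasToken + "&$top=10";
    using (var http = new HttpClient())
    {
        http.DefaultRequestHeaders.Add("Accept", "application/json;odata=nometadata");
        http.DefaultRequestHeaders.Add("x-ms-version", "2014-02-14");
        string json = await http.GetStringAsync(url); // entities come back as JSON
    }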
    Best Regards,
    Jambor

  • How to select data from AZure table storage without row key and partition key

    Hi,
    I need to select data from Azure table storage without a row key or partition key - the way clicking Query in the Azure storage emulator displays all the data from the table.
    Thanks,
    Rajesh

    Hi rajesh,
    It seems you may not be querying the data correctly from the storage emulator. I recommend using the Azure Server Explorer in VS to view and query your data; please see this document (http://msdn.microsoft.com/en-us/library/azure/ff683677.aspx).
    And based on my experience, you may need to input the command on the Azure storage emulator, as on this page (http://msdn.microsoft.com/en-us/library/azure/gg433005.aspx).
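    (In code, the equivalent of that Query button is just an unfiltered query; a minimal sketch, where table is a CloudTable reference:)
    // No PartitionKey/RowKey filter means a full table scan - fine for small tables only
    var everything = table.ExecuteQuery(new TableQuery<DynamicTableEntity>()).ToList();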
    Regards,
    Will

  • Azure table storage design for simple social networking

    What is the best table design if I want to use Azure Table Service for a simple social networking website?
    The website could have millions of users.
    Users need to be able to view a list of all other users in the system sorted by the number of mutual connections.
    Users must be able to view a list of their connections
    User must be able to view content posted by themselves and their connections.
    One major design constraint is that Azure table service queries are generally limited to the partition key and row key when there are a large number of records or else they get really slow. Another constraint is that query results are only sorted by the
    partition key and then the row key.

    For your scenario, I think using SQL Azure makes more sense than the Azure table storage offering; the data here is relational in nature, which is not a good fit for the table storage model.
    You can get started with SQL Azure at -
    http://azure.microsoft.com/en-us/services/sql-database/
    Bhushan | Blog |
    LinkedIn | Twitter

  • Table storage got 4 times slower from day to day

    Hi, I've run tasks which add data to the table storage on my cloud app each night for the last few months, but since Friday the 13th (yeah, the 13th, that's what I thought too), the tasks have been executing 4 times slower.
    The exact work being done is around 250,000 calls to 'ExecuteBatch', split across three workers - Thursday I got around 20 ops, and since Friday I've been getting around 5 ops. See the storage graph below; it clearly shows that the requests have dropped significantly...
    Does Azure table storage limit bandwidth in some cases, or what could have happened?
    Has anyone experienced this or something similar before?
    Has anyone experienced this or something similar before?

    Hi APMadsen,
    I guess the table storage may be affected by the larger and larger data volume. If so, I suggest you pay attention to your query code and spend time on query optimization (refer to this thread:
    http://social.msdn.microsoft.com/Forums/windowsazure/en-US/5326d280-513f-47a3-826d-2db97ebd9ace/why-is-this-azure-table-storage-query-so-slow). Also, you could refer to this big data sample (http://www.troyhunt.com/2013/12/working-with-154-million-records-on.html) and this blog (http://robertgreiner.com/2012/06/why-is-azure-table-storage-so-slow/).
    Hope it helps.
    Will

  • Performing a case insensitive table storage query: Storage client 3.0

    I'm using Storage Client 3.0 and performing queries against entity properties such as a string value for email address.
    I'm using the TableQuery class and my query looks something like this:
    CloudTable accountsTable = tableClient.GetTableReference(Settings.AccountTable);
    TableQuery<Account> rangeQuery = new TableQuery<Account>().Where(
        TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "account"),
            TableOperators.And,
            TableQuery.GenerateFilterCondition("Email", QueryComparisons.Equal, email)));
    var results = accountsTable.ExecuteQuery(rangeQuery).ToList();
    This works fine when the entity I'm searching for matches the case of my search term.  But how can I perform a case insensitive search?  I think this was possible when using linq queries but how do I accomplish this in the new storage client?
    thanks

    Hi,
    As far as I know, Azure table storage doesn't support every LINQ operator; this page (http://msdn.microsoft.com/en-us/library/dd135725.aspx) lists what is supported. You could use some common
    LINQ methods, as in these samples (http://blogs.msdn.com/b/kylemc/archive/2010/11/22/windows-azure-table-storage-linq-support.aspx).
    Also, I suggest you refer to this thread (http://stackoverflow.com/questions/8805759/azure-table-storage-query-in-net-with-property-names-unknown-at-design-time).
    You could change your code to the LINQ format and try it.
    Any question, please let me know.
    Thanks.
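    (Since the table service compares strings exactly, a common workaround is to store a lowercased shadow copy of the property and filter on that; a sketch based on the code above:)
    // Persist a lowercased copy (set on every write) and query it instead of Email
    public class Account : TableEntity
    {
        public string Email { get; set; }
        public string EmailLower { get; set; } // always Email.ToLowerInvariant()
    }
    TableQuery<Account> rangeQuery = new TableQuery<Account>().Where(
        TableQuery.CombineFilters(
            TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "account"),
            TableOperators.And,
            TableQuery.GenerateFilterCondition("EmailLower", QueryComparisons.Equal,
                email.ToLowerInvariant())));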
    Will

  • You must register your tables using JDDI before you request them

    Hey Guys....
    I'm trying to query a custom table created on a SQL Server 2005 database, the same RDBMS as the SAP Web AS. I'm using a different database from the system's standard one.
    I created a Datasource for it. I can get the connection, but when I execute a query I get the following message.
    Please note that I followed everything SAP recommends for the JDBC connector. I have done this previously for an Oracle database, but this time the approach (MS SQL) is not working...
    I don't understand what it means for registering the tables using JDDI.
    Could you please show me some light?
    java.rmi.RemoteException: com.sap.engine.services.dbpool.exceptions.BaseRemoteException: SQL statement(s) cannot be executed over DataSource "ZPORTAL". If you are using an Open SQL DataSource, you must register your tables using JDDI before you request them. Reason: java.sql.SQLException: [NWMss][SQLServer JDBC Driver][SQLServer]Invalid object name 'BC_DDDBTABLERT'. [id = Unknown]
         at com.sap.engine.services.dbpool.deploy.DataSourceManagerImpl.executeFromAppThread(DataSourceManagerImpl.java:1201)
         at com.sap.engine.services.dbpool.deploy.DataSourceManagerImpl.executeInitStatements(DataSourceManagerImpl.java:532)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:324)
         at com.sap.pj.jmx.introspect.DefaultMBeanInvoker.invoke(DefaultMBeanInvoker.java:58)
         at com.sap.pj.jmx.mbeaninfo.AdditionalInfoProviderMBean.invoke(AdditionalInfoProviderMBean.java:289)
         at com.sap.pj.jmx.server.MBeanServerImpl.invoke(MBeanServerImpl.java:944)
         at com.sap.pj.jmx.server.interceptor.MBeanServerWrapperInterceptor.invoke(MBeanServerWrapperInterceptor.java:288)
         at com.sap.engine.services.jmx.CompletionInterceptor.invoke(CompletionInterceptor.java:409)
         at com.sap.pj.jmx.server.interceptor.BasicMBeanServerInterceptor.invoke(BasicMBeanServerInterceptor.java:277)
         at com.sap.jmx.provider.ProviderInterceptor.invoke(ProviderInterceptor.java:258)
         at com.sap.engine.services.jmx.RedirectInterceptor.invoke(RedirectInterceptor.java:340)
         at com.sap.pj.jmx.server.interceptor.MBeanServerInterceptorChain.invoke(MBeanServerInterceptorChain.java:330)
         at com.sap.engine.services.jmx.MBeanServerSecurityWrapper.invoke(MBeanServerSecurityWrapper.java:287)
         at com.sap.engine.services.jmx.MBeanServerInvoker.invokeMbs(MBeanServerInvoker.java:131)
         at com.sap.engine.services.jmx.ClusterInterceptor.invokeMbs(ClusterInterceptor.java:212)
         at com.sap.engine.services.jmx.ClusterInterceptor.invoke(ClusterInterceptor.java:766)
         at com.sap.engine.services.jmx.MBeanServerInterceptorInvoker.invokeMbs(MBeanServerInterceptorInvoker.java:102)
         at com.sap.engine.services.jmx.connector.p4.P4ConnectorServerImpl.invokeMbs(P4ConnectorServerImpl.java:61)
         at com.sap.engine.services.jmx.connector.p4.P4ConnectorServerImplp4_Skel.dispatch(P4ConnectorServerImplp4_Skel.java:64)
         at com.sap.engine.services.rmi_p4.DispatchImpl._runInternal(DispatchImpl.java:319)
         at com.sap.engine.services.rmi_p4.DispatchImpl._run(DispatchImpl.java:200)
         at com.sap.engine.services.rmi_p4.server.P4SessionProcessor.request(P4SessionProcessor.java:136)
         at com.sap.engine.core.service630.context.cluster.session.ApplicationSessionMessageListener.process(ApplicationSessionMessageListener.java:33)
         at com.sap.engine.core.cluster.impl6.session.MessageRunner.run(MessageRunner.java:41)
         at com.sap.engine.core.thread.impl3.ActionObject.run(ActionObject.java:37)
         at java.security.AccessController.doPrivileged(Native Method)
         at com.sap.engine.core.thread.impl3.SingleThread.execute(SingleThread.java:102)
         at com.sap.engine.core.thread.impl3.SingleThread.run(SingleThread.java:172)
    Caused by: java.sql.SQLException: [NWMss][SQLServer JDBC Driver][SQLServer]Invalid object name 'BC_DDDBTABLERT'.
         at com.sap.dictionary.database.catalog.XmlCatalogReader.getTable(XmlCatalogReader.java:98)
         at com.sap.sql.catalog.impl.BufferedCatalogReader.getTable(BufferedCatalogReader.java:126)
         at com.sap.sql.catalog.impl.BufferedCatalogReader.getTable(BufferedCatalogReader.java:89)
         at com.sap.sql.sqlparser.CheckColAndTabVisitor.checkTabs(CheckColAndTabVisitor.java:247)
         at com.sap.sql.sqlparser.CheckColAndTabVisitor.performCatalogChecks(CheckColAndTabVisitor.java:170)
         at com.sap.sql.sqlparser.CommonSQLStatement.checkSemantics(CommonSQLStatement.java:184)
         at com.sap.sql.jdbc.common.StatementAnalyzerImpl.check(StatementAnalyzerImpl.java:42)
         at com.sap.sql.jdbc.common.StatementAnalyzerImpl.preprepareStatement(StatementAnalyzerImpl.java:126)
         at com.sap.sql.jdbc.common.StatementAnalyzerImpl.preprepareStatement(StatementAnalyzerImpl.java:109)
         at com.sap.sql.jdbc.common.CommonStatementImpl.execute(CommonStatementImpl.java:217)
         at com.sap.engine.services.dbpool.wrappers.StatementWrapper.execute(StatementWrapper.java:167)
         at com.sap.engine.services.dbpool.deploy.DBInitializer.run(DBInitializer.java:69)
         ... 4 more

    I changed the SQL Engine parameter to "Vendor SQL" under the Additional tab of my datasource configuration.

  • HT1338 What is the best online storage for photos? Specifically, one that allows the original image quality to be downloaded should your hard storage go belly up

    What is the best online storage for photos? Specifically, one that allows the original image quality to be downloaded should your hard storage go belly up.

    I'd put them on external hard drive(s) and burn them to DVD as well (at least 2-3 copies on different drives/media); I prefer having control and a local solution instead of relying on a server and the possibility of someone (who shouldn't be) downloading my work.

  • HT4847 You have now exceeded your iCloud storage, including an additional amount provided to allow you to continue receiving email. As a result, you will not be able to send or receive new email messages with your iCloud email address until you free up st

    You have now exceeded your iCloud storage, including an additional amount provided to allow you to continue receiving email. As a result, you will not be able to send or receive new email messages with your iCloud email address until you free up storage space or buy more storage. I have 20GB remaining. What is the issue here?

    Today I received the same message but have 4.6GB available from a total of 5.0GB. I also received the same message when I first set up my iCloud account and iCloud was virtually empty. Unfortunately, the only way I can see to contact Apple is to pay for a telephone call. If anyone knows what is going on, I would appreciate knowing.

  • Oracle Table Storage Parameters - a nice reading

    Gony's reading exercise for 07/09/2009 -
    The text below is from the web source http://www.praetoriate.com/t_%20tuning_storage_parameters.htm. Very good material. The notes refer to figures and diagrams which cannot be seen below, but the text itself is very useful.
    Let’s begin this chapter by introducing the relationship between object storage parameters and performance. Poor object performance within Oracle is experienced in several areas:
    Slow inserts: Insert operations run slowly and have excessive I/O. This happens when blocks on the freelist only have room for a few rows before Oracle is forced to grab another free block.
    Slow selects: Select statements have excessive I/O because of chained rows. This occurs when rows “chain” and fragment onto several data blocks, causing additional I/O to fetch the blocks.
    Slow updates: Update statements run very slowly with double the amount of I/O. This happens when update operations expand a VARCHAR or BLOB column and Oracle is forced to chain the row contents onto additional data blocks.
    Slow deletes: Large delete statements can run slowly and cause segment header contention. This happens when rows are deleted and Oracle must relink the data block onto the freelist for the table.
    As we see, the storage parameters for Oracle tables and indexes can have an important effect on the performance of the database. Let’s begin our discussion of object tuning by reviewing the common storage parameters that affect Oracle performance.
    The pctfree Storage Parameter
    The purpose of pctfree is to tell Oracle when to remove a block from the object’s freelist. Since the Oracle default is pctfree=10, blocks remain on the freelist while they are less than 90 percent full. As shown in Figure 10-5, once an insert makes the block grow beyond 90 percent full, it is removed from the freelist, leaving 10 percent of the block for row expansion. Furthermore, the data block will remain off the freelist even after the space drops below 90 percent. Only after subsequent delete operations cause the space to fall below the pctused threshold of 40 percent will Oracle put the block back onto the freelist.
    Figure 10-83: The pctfree threshold
    The pctused Storage Parameter
    The pctused parameter tells Oracle when to add a previously full block onto the freelist. As rows are deleted from a table, the database blocks become eligible to accept new rows. This happens when the amount of space in a database block falls below pctused, and a freelist relink operation is triggered, as shown in Figure 10-6.
    Figure 10-84: The pctused threshold
    For example, with pctused=60, all database blocks that have less than 60 percent will be on the freelist, as well as other blocks that dropped below pctused and have not yet grown to pctfree. Once a block deletes a row and becomes less than 60 percent full, the block goes back on the freelist. When rows are deleted, data blocks become available when a block’s free space drops below the value of pctused for the table, and Oracle relinks the data block onto the freelist chain. As the table has rows inserted into it, it will grow until the space on the block exceeds the threshold pctfree, at which time the block is unlinked from the freelist.
    The freelists Storage Parameter
    The freelists parameter tells Oracle how many segment header blocks to create for a table or index. Multiple freelists are used to prevent segment header contention when several tasks compete to INSERT, UPDATE, or DELETE from the table. The freelists parameter should be set to the maximum number of concurrent update operations.
    Prior to Oracle8i, you must reorganize the table to change the freelists storage parameter. In Oracle8i, you can dynamically add freelists to any table or index with the alter table command. In Oracle8i, adding a freelist reserves a new block in the table to hold the control structures. To use this feature, you must set the compatible parameter to 8.1.6 or greater.
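    (For illustration, the dynamic form is a one-liner; the table name is arbitrary:)
    alter table customer storage (freelists 20);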
    The freelist groups Storage Parameter for OPS
    The freelist groups parameter is used in Oracle Parallel Server (Real Application Clusters). When multiple instances access a table, separate freelist groups are allocated in the segment header. The freelist groups parameter should be set to the number of instances that access the table. For details on segment internals with multiple freelist groups, see Chapter 13.
    NOTE: The variables are called pctfree and pctused in the create table and alter table syntax, but they are called PCT_FREE and PCT_USED in the dba_tables view in the Oracle dictionary. The programmer responsible for this mix-up was promoted to senior vice president in recognition of his contribution to the complexity of the Oracle software.
    Summary of Storage Parameter Rules
    The following rules govern the settings for the storage parameters freelists, freelist groups, pctfree, and pctused. As you know, the value of pctused and pctfree can easily be changed at any time with the alter table command, and the observant DBA should be able to develop a methodology for deciding the optimal settings for these parameters. For now, accept these rules, and we will be discussing them in detail later in this chapter.
    There is a direct trade-off between effective space utilization and high performance, and the table storage parameters control this trade-off:
    For efficient space reuse A high value for pctused will effectively reuse space on data blocks, but at the expense of additional I/O. A high pctused means that relatively full blocks are placed on the freelist. Hence, these blocks will be able to accept only a few rows before becoming full again, leading to more I/O.
    For high performance A low value for pctused means that Oracle will not place a data block onto the freelist until it is nearly empty. The block will be able to accept many rows until it becomes full, thereby reducing I/O at insert time. Remember that it is always faster for Oracle to extend into new blocks than to reuse existing blocks. It takes fewer resources for Oracle to extend a table than to manage freelists.
    While we will go into the justification for these rules later in this chapter, let’s review the general guidelines for setting of object storage parameters:
    Always set pctused to allow enough room to accept a new row. We never want to have a free block that does not have enough room to accept a row. If we do, this will cause a slowdown since Oracle will attempt to read five “dead” free blocks before extending the table to get an empty block.
    The presence of chained rows in a table means that pctfree is too low or that db_block_size is too small. In most cases within Oracle, RAW and LONG RAW columns make huge rows that exceed the maximum block size for Oracle, making chained rows unavoidable.
    If a table has simultaneous insert SQL processes, it needs to have simultaneous delete processes. Running a single purge job will place all of the free blocks on only one freelist, and none of the other freelists will contain any free blocks from the purge.
    The freelist parameter should be set to the high-water mark of updates to a table. For example, if the customer table has up to 20 end users performing insert operations at any time, the customer table should have freelists=20.
    The freelist groups parameter should be set to the number of Real Application Clusters instances (Oracle Parallel Server in Oracle8i) that access the table.
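    (Pulling those rules together, a pre-ASSM table definition might look like this; the numbers are purely illustrative:)
    create table customer (
        cust_id    number,
        cust_name  varchar2(100))
      pctfree 10                    -- room kept back for row expansion
      pctused 40                    -- when a block is relinked onto the freelist
      storage (freelists 20         -- high-water mark of concurrent inserts
               freelist groups 2);  -- one per OPS/RAC instance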

    sb92075 wrote:
    goni,
    Please let go of the 20th century & join the rest of the world in the 21st century.
    The information presented is obsolete & can be ignored when using ASSM, & ASSM is the default with V10 & V11.
    I said the same over here for exactly the same thread; not sure what the heck the OP is up to:
    Oracle Table Storage Parameters - a nice reading
    regards
    Aman....

  • 403 Error when access Table Storage using SAS token

    I have Azure Mobile Service which has a custom API to generate a sas token for accessing Table Storage from Windows Store app.
    I get the following error in the Windows Store app while accessing table storage using the SAS token:
    Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
    Example of sas token generated:
    se=2014-09-12T03%3A10%3A00Z&sp=rw&spk=MicrosoftAccount%3A005d92ef08ec5d83081afed1e08641d2&epk=MicrosoftAccount%3A005d92ef08ec5d83081afed1e08641d2&sv=2014-02-14&tn=Folders&sig=91c7S1QM0byNdM80JncwRribXqsWS1iKmOH8cRvHWhQ%3D
    Azure Mobile Services API Code that generates sas token:
    exports.get = function(request, response) {
        var azure = require('azure-storage');
        var accountName = 'myAccountName';
        var accountKey = 'myAccountKey';
        var host = accountName + '.table.core.windows.net';
        var tableService = azure.createTableService(accountName, accountKey, host);
        var sharedAccessPolicy = {
            AccessPolicy: {
                Permissions: 'rw', // Read and Write permissions
                Expiry: dayFromNow(1),
                StartPk: request.user.userId,
                EndPk: request.user.userId
            }
        };
        var sasToken = tableService.generateSharedAccessSignature('myTableName', sharedAccessPolicy);
        response.send(statusCodes.OK, { sasToken : sasToken });
    };
    function dayFromNow(days) {
        var result = new Date();
        result.setDate(result.getDate() + days);
        return result;
    }
    Windows Store app code that uses sas token:
    public async Task TestSasApi()
    {
        try
        {
            var tableEndPoint = "https://myAccount.table.core.windows.net";
            var sasToken = await this.MobileService.InvokeApiAsync<Azure.StorageSas>("getsastoken", System.Net.Http.HttpMethod.Get, null);
            StorageCredentials storageCredentials = new StorageCredentials(sasToken);
            CloudTableClient tableClient = new CloudTableClient(new Uri(tableEndPoint), storageCredentials);
            var tableRef = tableClient.GetTableReference("myTableName");
            TableQuery query = new TableQuery().Where(
                TableQuery.GenerateFilterCondition("PartitionKey",
                    QueryComparisons.Equal,
                    this.MobileService.CurrentUser.UserId));
            var seg = await tableRef.ExecuteQuerySegmentedAsync(query, null);
            foreach (DynamicTableEntity ent in seg)
            {
                string str = ent.ToString();
            }
        }
        catch (Exception ex)
        {
            string msg = ex.Message;
        }
    }
    Exception:
    Any help is appreciated.
    Thanks in advance!
    Thanks, Vinod Shinde

    Hi Mekh,
    Thanks for the links. I checked them, and mostly they attribute this error to a date/time mismatch between the client and server, but that is not the case in this scenario.
    here is the Request and Response from Fiddler.
    Request:
    GET
    https://myaccount.table.core.windows.net/Folders?se=2014-09-13T02%3A33%3A26Z&sp=rw&spk=MicrosoftAccount%3A005d92ef08ec5d83081afed1e08641d2&epk=MicrosoftAccount%3A005d92ef08ec5d83081afed1e08641d2&sv=2014-02-14&tn=Folders&sig=YIwVPHb2wRShiyE2cWXV5hHg0p4FwQOGmWBHlN3%2FRO8%3D&api-version=2014-02-14&$filter=PartitionKey%20eq%20%27MicrosoftAccount%3A005d92ef08ec5d83081afed1e08641d2%27
    HTTP/1.1
    Accept: application/atom+xml, application/xml
    Accept-Charset: UTF-8
    MaxDataServiceVersion: 2.0;NetFx
    x-ms-client-request-id: b5d9ab61-5cff-498f-94e9-437694e9256c
    User-Agent: WA-Storage/4.2.1 (Windows Runtime)
    Host: todoprime.table.core.windows.net
    Response:
    HTTP/1.1 403 Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
    Content-Length: 437
    Content-Type: application/xml
    Server: Microsoft-HTTPAPI/2.0
    x-ms-request-id: 22c0543b-0002-0049-7337-da39f4000000
    Date: Thu, 11 Sep 2014 02:33:28 GMT
    <?xml version="1.0" encoding="utf-8" standalone="yes"?>
    <error xmlns="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata">
      <code>AuthenticationFailed</code>
      <message xml:lang="en-US">Server failed to authenticate the request. Make sure the value of Authorization header is formed correctly including the signature.
    RequestId:22c0543b-0002-0049-7337-da39f4000000
    Time:2014-09-11T02:33:29.6520060Z</message>
    </error>
    Do you see anything different in this request/response?
    Thanks, Vinod Shinde
