A Ring Partition Algorithm

I'm looking for something about RPA and an implementation, i.e. in C++.

I'm looking for something about RPA and an implementation, i.e. in C++.
http://www.google.com/search?hl=en&lr=&safe=off&q=Ring+Partition+Algorithm+c%2B%2B&btnG=Search

Similar Messages

  • Hash partition algorithm

    If I hash partition a table on CUSTOMER_ID into, say, p partitions, and I receive a daily batch feed of flat-file transaction records that contain CUSTOMER_ID, I need to split the batch of incoming source records into p parts, each corresponding to one of the p partitions. I can do this if I am able to execute the same hash algorithm, with CUSTOMER_ID as the parameter, to get a number between 1 and p. Then I know which partition Oracle has assigned this CUSTOMER_ID to, and I can distribute the batch records among parallel threads with affinity between threads and Oracle table partitions.
    Can anybody let me know if the hash algorithm is available to call? Is it available in any package?

    I hope I understood your requirement correctly: you want to divide the input file into 3 files corresponding to the partitions, right?
    Since your partitioned table is based on a hash algorithm, there is nothing obvious.
    But since you are doing updates only, you could run a pre-check in the database to find out which partition each row is in: based on the partition key read from the input file, write one file per partition accordingly, and then run your 3 batches against the different partitions, each using its own file. It will require one full scan of the input file before processing, so I don't know how much gain you could hope for from such a scheme, though.
    Nicolas.
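    Nicolas's pre-split idea can be sketched client-side. This is only an illustration under stated assumptions: the `bucket()` function below is a stand-in and does NOT match Oracle's internal hash-partitioning function. In practice you would compute the bucket server-side, e.g. with Oracle's `ORA_HASH(customer_id, p - 1)` (community reports suggest this lines up with hash-partition placement when p is a power of two, so verify against your own table), and keep only the file splitting client-side. The CSV layout is invented.

```python
# Sketch: split a flat file of transaction records into p parts, one per
# hash partition, so each batch thread has affinity with one partition.
# NOTE: bucket() is a stand-in hash, not Oracle's internal algorithm.
import zlib

def bucket(customer_id: str, p: int) -> int:
    """Deterministic bucket in 0..p-1 (stand-in for Oracle's hash)."""
    return zlib.crc32(customer_id.encode()) % p

def split_batch(lines, p):
    """Group CSV record lines by bucket: {bucket_number: [lines]}."""
    parts = {i: [] for i in range(p)}
    for line in lines:
        customer_id = line.split(",")[0]  # assume CUSTOMER_ID is field 1
        parts[bucket(customer_id, p)].append(line)
    return parts

records = ["C001,2024-01-01,19.99", "C002,2024-01-01,5.00",
           "C001,2024-01-02,7.50"]
parts = split_batch(records, p=3)
```

    Every record for a given CUSTOMER_ID lands in the same part, which is all the thread-to-partition affinity needs; whether the part number equals Oracle's partition number depends on using the database's own hash.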

  • Dynamic and intelligent re-balancing of coherence partitions

    Can Coherence dynamically route more requests intelligently to better performing nodes on the same cluster? i.e. can it rebalance its partitions dynamically such that there are either more partitions or more frequently accessed partitions made to reside on the more powerful nodes of the cluster.
    Edited by: 962776 on Oct 3, 2012 10:03 AM

    962776 wrote:
    Can Coherence dynamically route more requests intelligently to better performing nodes on the same cluster? i.e. can it rebalance its partitions dynamically such that there are either more partitions or more frequently accessed partitions made to reside on the more powerful nodes of the cluster.Hi,
    this is kind of a loaded question, which is not as simple as it seems so let me step back a bit and put it into context.
    Coherence maintains a single active copy of each partition, which serves both reads and writes.
    It does not maintain multiple copies of the same partition even for read purposes (the redundant copies, i.e. backups, exist only for high availability). This means there is a single node the client needs to communicate with, so the client does not need to maintain load information upon which to decide where to route the request. You could possibly implement lagging-read behaviour serviced by the backup nodes yourself on top of Coherence, but it does not come out of the box, and you would need to transmit load information to every other node, which comes with an overhead. "Lagging read" here means that you may read a not-up-to-date copy from backup nodes.
    As for write operations, Coherence depends on there being only at most one active copy for write purposes, therefore you should not expect this to change anytime soon or not so soon...
    As for your original question: can ownership be rebalanced based on the load?
    It is not possible to do it out-of-the-box. Also, the functionality would come with its own gotchas:
    - load information needs to be transmitted to the senior node (this may already be done behind the scenes)
    - it may not be so simple to reconcile decisions based on load information with decisions based on balancing and dispersing partitions so that you get to a balanced and safe distribution while also optimizing the load
    - Would you want to move partitions just based on the load, not only for availability/safety reasons? The current behaviour theoretically does something similar, if you consider the load cost being a constant one per partition. The operative word is constant.
    Once a balanced and safe distribution is reached, Coherence does not want to move partitions around.
    Why is this important? Because you cannot process operations while a partition is on the move.
    If you wanted the cost to actually vary instead of being a constant 1, it would likely lead to much more frequent partition movements. That would add load to the network, increase message latencies in general, and in particular hinder the responsiveness of the partitions on the move, which, according to your request, would likely be exactly the most heavily loaded ones. That could well be counterproductive.
    So yes, theoretically it is possible, you could implement it with 3.7.1+ by writing your own implementation of PartitionAssignmentStrategy. But is it practical? It really depends on your exact system.
    An alternative approach which could tackle the problem may be changing your key partitioning algorithm so that it balances data in the partitions more evenly, maybe that will also translate to a more even distribution of the load. You would have to try and measure it. Also, you may want to look at BroadKeyPartitioningStrategy. Of course if you have a few very hot associated keys, that would make any single partition hot and that cannot really be helped.
    Best regards,
    Robert
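    Robert's "constant cost of 1 per partition" point can be made concrete with a toy balancer. This is a hedged sketch, not Coherence code: the real extension point is the Java PartitionAssignmentStrategy interface (3.7.1+), and the greedy heap assignment below is just one illustrative way to balance a per-partition cost across members.

```python
# Sketch: greedily assign partitions to the currently least-loaded member.
# With a constant cost per partition this reproduces the default behaviour
# (even ownership counts); with a measured cost, one hot partition skews
# the distribution -- the trade-off discussed above.
import heapq

def assign(partition_cost, members):
    """Return {partition_id: member} balancing total assigned cost."""
    heap = [(0.0, m) for m in members]  # (total assigned cost, member)
    heapq.heapify(heap)
    owner = {}
    # Place the most expensive partitions first for a better greedy balance.
    for pid, cost in sorted(partition_cost.items(), key=lambda kv: -kv[1]):
        load, m = heapq.heappop(heap)
        owner[pid] = m
        heapq.heappush(heap, (load + cost, m))
    return owner

# Constant cost 1 per partition (the default assumption): an even 4/4 split.
even = assign({p: 1 for p in range(8)}, ["A", "B"])
# Measured cost with one hot partition: the split shifts to 1 vs 7.
hot = assign({0: 10, **{p: 1 for p in range(1, 8)}}, ["A", "B"])
```

    Note what the sketch does not model: the cost of *moving* a partition, which is Robert's main caution against chasing a variable load metric.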

  • What combination of partitioning Method is better?

    Hi everybody,
    I have the following problem:
    I have a big table (more than 1,000,000,000 rows):
    Create table sales (
    Id_dept varchar(1),
    sales_day date,
    day varchar(2),
    Qte number(5)
    );
    I would like to partition this table. I have only 9 departments and I load data day by day. Which combination of partitioning is better?
    Solution 1) RANGE partitioning by date and LIST sub partitioning by department.
    Or
    Solution 2) RANGE partitioning by department (id_dept) and HASH sub-partitioning by date.
    The first solution makes administration easier (adding and dropping partitions day by day will be easy).
    The second solution seems to me better for partition pruning.
    95% of requests concern only one department like the following request:
    Select * from sales where id_dept= :x
    and sales_day in (…), and …
    In solution 2, I immediately filter out the 8 other department partitions.
    But as I have only 9 departments, I wonder if range partitioning is OK for the department column. In other words, does range partitioning work well when each partition covers only one value instead of a range of values?
    Partition by RANGE (id_dept)
    Subpartition by HASH(sales_day) subpartitions 90
    (Partition part1 values less than (2),
    Partition part2 values less than (3),
    Partition part3 values less than (4),
    Partition part9 values less than (9));
    Could you give me some advice, please?
    Regards
    Azizollah

    Solution 2 is out of the question as you've stated it. First, you would typically range partition by a value such as a date; there is no need for a hash when you have columns that lend themselves to logical segregation (sales_day and id_dept). A hash is best for tables in which you don't have columns that provide any sort of logical separation.
    If you are partitioning, it is basically to make managing your tables easier (e.g. partition pruning). Therefore, pruning by date rather than by department makes more sense. Without knowing exactly what you want the partitioning for, or how you use your tables, I can't say do this or that. But given your table, I'd take a look at doing a composite of RANGE (date) as the top-level scheme with LIST (dept) as the sub-partition.
    For documentation, refer to:
    http://download-east.oracle.com/docs/cd/B19306_01/server.102/b14223/parpart.htm#g1020112
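    The recommendation above (RANGE on the date at the top level, LIST on the department underneath) still prunes well for department-only queries, because the optimizer can skip every sub-partition whose department list does not match, within every date partition. Here is a small simulation of that pruning arithmetic; the table layout follows the thread (9 departments, daily partitions), but the pruning logic is illustrative, not Oracle's.

```python
# Sketch: composite RANGE(sales_day) + LIST(id_dept) layout, and how many
# sub-partitions a query actually has to scan after pruning.
from datetime import date

DEPTS = [str(d) for d in range(1, 10)]           # 9 departments
DAYS = [date(2024, 1, d) for d in range(1, 8)]   # one RANGE partition per day

# One sub-partition per (day, dept) pair.
partitions = {(day, dept): [] for day in DAYS for dept in DEPTS}

def prune(dept=None, days=None):
    """Return the sub-partitions a query would actually have to scan."""
    return [(d, dp) for (d, dp) in partitions
            if (dept is None or dp == dept)
            and (days is None or d in days)]

# The 95% case (one dept, a couple of days) scans 2 of 63 sub-partitions.
scanned = prune(dept="3", days={date(2024, 1, 1), date(2024, 1, 2)})
```

    The point: with the composite scheme, a dept-only predicate still eliminates 8 of every 9 sub-partitions, while the daily RANGE partitions keep the add/drop maintenance of solution 1.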

  • Cache of a Filesystem

    I would like to use Coherence as a fast front end for accessing files on a filesystem. After looking at how Coherence works, I think I need to write a "FilesystemCacheStore" that implements the CacheStore interface. So when I write items to the cache, they eventually end up on the filesystem. When I "get" items that are not in the cache, the filesystem would be used (via the FileSystemCacheStore) to get the data.
    Is this the right way to think about it? Maybe there are some things I should consider when using Coherence in this way?
    Thanks!

    Sounds doable - as long as EVERYBODY accesses the file system ONLY this way (or else you must find a way to invalidate the cached file content if it is changed "from the side"). If your "files" are large, this will probably not work very well (let's say you use a near cache - each time anybody changes a file, the whole file will be invalidated and must be transferred again). A "real" cached/distributed file system partitions files into blocks (like a file system on disk) to overcome this problem...
    Since disk is awfully slow (compared to memory or even the network) you may need a LARGE thread pool on each node, or your service threads will all get stuck awfully fast - hopefully you have a really high read/update ratio, or this may still become a problem...
    You may consider whether a custom object-to-partition algorithm could improve your solution - if users, for instance, often want to operate on all files in a directory, you may want to ensure that files in the same directory end up in the same partition. This would, for instance, make an "ls/dir" of a directory a faster operation, since it would only involve a single node (i.e. a PartitionFilter can be used to limit the query)...
    Is the file system accessible from all the machines acting as cache nodes, or do you plan to use local disk on the cache nodes? If using local disk you get very good aggregated I/O performance, but you must also consider what happens if a node fails (i.e. the files will not be accessible until it is back online).
    In most applications each file is updated separately from the others - if this is true in your case you may consider write-behind caching... Also look at whether update processors can be used to make small changes to a file without replacing the whole file in the cache...
    /Magnus
    Edited by: MagnusE on Jan 14, 2010 7:16 AM
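    Magnus's object-to-partition idea can be sketched as follows. This is a hedged illustration only: in Coherence the real mechanism would be a Java KeyPartitioningStrategy, and the partition count and hash below are invented, not Coherence's internals.

```python
# Sketch: derive the partition from the file's *parent directory* rather
# than its full path, so all files in a directory land in one partition
# and a directory listing becomes a single-node operation.
import posixpath
import zlib

PARTITION_COUNT = 257  # illustrative partition count

def partition_for(path: str) -> int:
    """Hash the parent directory, not the full path, so siblings colocate."""
    directory = posixpath.dirname(path)
    return zlib.crc32(directory.encode()) % PARTITION_COUNT

# Files that share a directory always share a partition:
same = {partition_for(p) for p in ("/data/a/x.txt", "/data/a/y.txt")}
```

    The trade-off is the usual one with affinity: a single very large or very hot directory concentrates its load on one partition, which echoes the "hot associated keys" caveat from the earlier Coherence reply.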

  • My Mavericks Partition Will Not Boot due to "Incorrect Number of Thread Records", But Windows 7 On Same Drive Does

    I got up from my chair at the office to make myself coffee and came back to find my computer powered down. "That's weird," I said, "I don't remember turning the computer off." I pressed the power button and the loading bar appeared below the Apple symbol (as it should after a bad shutdown) but alas, when the bar got to the end, there was a loading ring and the computer immediately shut down.
    I then held down the alt key and was able to boot into my Boot Camp Windows 7 partition (on the same hard drive), but I am unable to do a few things with my OS X Mavericks partition.
    First off I have the entire partition backed up, as I was able to access all of the files and the drive using my windows partition (drive appeared in "my computer" I was able to copy paste everything just fine). Secondly when I boot in the Recovery HD, the "Macintosh HD" partition appears unmounted, I can't erase it because disk utility says that the disk could not be opened. When I try to verify it, it says that the drive needs to be repaired, but when I repair it, disk utility tells me "Error: Disk utility can't repair this disk...disk, and restore your backed-up files."
    tldr; I cannot erase the disk, I cannot repair the disk using disk utility in the Recovery HD, but I can see the disk in windows running on the same drive... so at least I have that going for me.
    I tried starting up in verbose mode and it failed after 3 tries. I attempted to use the method described here: http://african-heart.blogspot.com/2010/07/fixing-invalid-node-structure-in-mac.html?m=1 . I have run the method 6 times and nothing has worked. Ideally I'd like to wipe the "Macintosh HD" partition and start again, but I cannot seem to do that through Disk Utility.
    are there any methods for fixing the partition/wiping the partition that I am missing?

    The same thing is happening to me. I had just clicked on a form submit button when suddenly the video went dizzy for a split second (I almost couldn't see it), the screen went black, and a message warning of a system crash appeared. After that, the system rebooted with a progress bar under the spinning circle, and it turns my Mac off once the bar fills.
    Trying the native Apple hardware test right now to see if it can identify any problem.
    I'll post any news here in at most one hour, after the test finishes.

  • How do I install Leopard on the partition I've created on my MacBook Pro running Lion?

    I've partitioned my hard drive (running Lion) so that I can run Leopard. How do I install Leopard on the partition? I've put in the Leopard CD and restarted, but all I get is a black screen with a bunch of gibberish.

    There's no way the MacBookPro will boot to Leopard.
    Snow Leopard may be possible, but you'd need the original discs from Apple customer Service - the retail disc isn't a high enough version.
    To check viability of Snow Leopard, have a look at a.brody's article;
    https://discussions.apple.com/docs/DOC-2455
    If it looks like SL is feasible ring the Apple Customer Services with the model and serial no. and see if the original SL discs are available.

  • RAID1 Partitions disappeared when one of the mirrored HDD is used as an external drive for data rescue.

    Hi, I have a mirrored set of two identical 1TB HDDs (RAID 1) in my Xserve Late 2006, in Bay 2 and Bay 3. When I took one HDD out and connected it as an external drive to my MacBook Pro, only the first partition appeared. That is, for instance, I have RAID partitions A1, A2, A3 on the Xserve, but only A1 appears when the drive is connected to the MacBook Pro as an external drive. Is there any way to make partitions A2 and A3 appear too? Thanks.

    Oh, I didn't realise you were using the hardware RAID card for this. In that case, yes, that makes sense, you won't see the other partitions because they're handled by the RAID card. I admit I'd never tried it with a RAID 1 mirror where you might expect to - I've only used RAID 5 on the hardware cards where there's no expectation, of course, to pull a drive to relocate the data.
    So you may ask why not just use a single partition for my RAID 1 design. The reason is that I prefer to partition the HDD because it is good for HDD lifespan. I don't like to design my partition schema as just one single big partition on a 1TB hard drive, since all the data would be stored scattered and hence take longer for spindle reads.
    I don't agree with most of this statement. I've not seen anything that indicates partitioning improves lifespan - for any given data, the disk spins the exact same number of times, and the heads move the same number of times, regardless of the partitioning - partitioning is only a logical, not a physical, construct.
    As for the data being scattered and spindle latency, that's nowhere near the problem it used to be in the past thanks to faster spindles and larger caches. In addition, Mac OS X uses a dynamic reallocation/fragmentation system that automatically (re)allocates files as they're used, meaning you don't have the level of fragmentation that you used to have.
    The primary benefit of partitioning is to restrict file system sizes so that you don't get a runaway process filling up the entire disk. Even that is less of a problem now - when an 8GB drive was large it didn't take much to fill it, but you'd have to have a pretty rampant process to fill up a 1TB drive before alarms started ringing.

  • I just upgraded to lion on my intel macbook.  I would like to change my facetime alert to something different than a phone ring and, I would like to be able to have full screen.  How do I do this?

    I just upgraded to lion on my intel macbook.  I would like to change my facetime alert to something different than a phone ring and, I would like to be able to have full screen.  How do I do this?

    Downgrade Lion to Snow Leopard
    1.  Boot from your Snow Leopard Installer Disc. After the installer loads select your language and click on the Continue button.  When the menu bar appears select Disk Utility from the Utilities menu.
    2. After DU loads select your hard drive (this is the entry with the mfgr.'s ID and size) from the left side list. Note the SMART status of the drive in DU's status area.  If it does not say "Verified" then the drive is failing or has failed and will need replacing.  SMART info will not be reported  on external drives. Otherwise, click on the Partition tab in the DU main window.
    3. Under the Volume Scheme heading set the number of partitions from the drop down menu to one. Set the format type to Mac OS Extended (Journaled.) Click on the Options button, set the partition scheme to GUID then click on the OK button. Click on the Partition button and wait until the process has completed.
    4. Quit DU and return to the installer. Install Snow Leopard.
    This will erase the whole drive so be sure to backup your files if you don't have a backup already. If you have performed a TM backup using Lion be aware that you cannot restore from that backup in Snow Leopard (see below.) I suggest you make a separate backup using Carbon Copy Cloner 3.4.1.
    If you have Snow Leopard Time Machine backups, do a full system restore per #14 in Time Machine - Frequently Asked Questions.  If you have subsequent backups from Lion, you can restore newer items selectively, via the "Star Wars" display, per #15 there, but be careful; some Snow Leopard apps may not work with the Lion files.

  • IMac Intel partition login problem

    Since yesterday our iMac runs very slowly and refuses to log in to the primary identity, although it will very slowly load two minor user identities. When one tries to log in on the main original user one, the screen goes blue without wavy lines and after 2 minutes or so goes back to the user identity login screen without comment. The only comment from the console was as follows:
    Jan 3 16:02:18 a-hs-computer /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow: Login Window Application Started
    Jan 3 16:02:19 a-hs-computer(243): Login Window Started Security Agent
    Mac OSX Version 10.4.1 (Build 8S2167)
    2010-01-03 +0100
    That is all!
    At 15:38 there is a crash report: mdimport server crashed ('SystemUIServer not responding', 'crashdump crashed'), which repeats for 4-5 minutes, including an 'lsregister crashed' in the middle somewhere, but it is too long to repeat verbatim here.
    Checking with disk utility, the report goes as follows:
    "Checking HFS Plus volume
    Checking Extents Overflow file
    Checking catalog file
    Invalid node structure (in red)
    The volume Macintosh HD needs to be repaired
    (The program now demands I put in an administrative password but when I DO SO it stops the checking and just adds:
    Error: The underlying task reported failure on exit (in bigger red letters)
    1 HFS volume checked
    Volume needs repair.
    (But the repair button is greyed out.)
    Macs used to have a disk repair utility SOSDisk (with ambulance graphic) that one could use from the OS System disk, when starting with this. Can one repair with the iMac Intel OS CD?
    I saw our old AppleCare CD has TechTool on it. So installed it and tested the machine. The 2006 version reported that the Directory Scan had failed but the Drive Hardware had passed (so perhaps it is reparable). The Surface Scan was so slow that I skipped it once it had reported 4 errors. The final item, Volume Structure failed.
    Techtool then says TechTool Deluxe can attempt to "repair the damage" or "reduce the chance of future problems" but I find no trace of a repair tool on it. It takes one back to Apple Support to download the latest version of TechTool Deluxe (2009) but I find no trace of a useable repair tool on that one either, merely a checking system. This time the test passed the Directory scan which had failed with the earlier version, and I let the Surface Scan run slowly to its end over 2 hours, when it reported Failed with 72 errors. Volume Structure failed too.
    The newer TechTool is v. 3.1.3. As I did not put in the AppleCare identity when downloading (ours has expired), is it possible I have a cut-down version without the repair kit? Does anyone know Apple's policy on that? It seemed to download fine without the AppleCare identity.
    The TechTool instructions say one should burn the disk image of the TechTool Deluxe DVD to a DVD using Disk Utility, but when I open the disk image and click the burn button in DU, the DVD starts burning and then gets spat back out after 30 seconds. Am I doing it wrong, or is the DVD burner broken? The machine will still burn CDs, as I made some last night. But I have never burned a DVD using Disk Utility (first tries today).
    Also, is it possible to start the machine using the Apple Care CD as system and repair with that? It seemed unlikely and this machine now takes so long to restart and really start working (2-3 hours) that I do not dare risk it yet. I use the damaged iMac still to contact you here.
    Thanks for any advice..!

    Unfortunately we are probably the only Mac in this desolate corner of North Germany where the inhabitants have always massively opted for Windows as they always go for the cheapest option. As far as I know we are the only Mac owners in a school of over 1500 people with 150 teachers. Had I been able to borrow a Mac, I would have linked it to ours long ago, got our data out and run DiskWarrior from the healthy Mac. The Hamburg techies are extremely expensive as they wrongly assume that anyone with a Mac must be eating off gold plate.
    I would grab the opportunity to use a normal keyboard, if available. I see you are from California; that's a paradise for available Macs.
    Don't forget that we can't use the normal means of getting a disc to eject, as there is no hard drive left worth the name and therefore no system running, except whatever system might be (unclear) on the DiskWarrior application, but that is totally frozen.
    Your feeling is that I want everything! Yes, another Mac, that would solve the problem. But if you mean that I need help for trifles you are quite wrong. I went through the lists of methods of ejecting disks listed on the web and by Apple before posting the request you have just replied to. And they look OS dependent to me, except perhaps the mouse button technique.
    In any case my first concern is to close that DiskWarrior application to avoid it being damaged before closing the disk, but that looks increasingly impossible.
    So far, by my own efforts in this crisis, I have largely solved a lot of problems worth at least 3,500 dollars in techie bills had they done it. When we started by ringing the firm we bought the iMac from new, they said they could not help us over the jammed user partition problem, and merely provided the address of a data recovery service that would charge "between 2,000 and 3,000 Euros (3,000-4,000 dollars)" for saving the data on the HD. I refused and myself worked out how to hack into the user partition, and was able to put most of the work and business data in the Documents folder onto sticks before the HD finally folded. Essentially the holiday pics in iPhoto remain still to do.
    I don't like the idea of having to sacrifice a brandnew DiskWarrior program for 108 Euros but if no one has any suggestions... Pity all this happened one month after our Apple Care 3 yr warranty ended.

  • How to implement hashing algorithm in oracle

    implement hashing algorithm in oracle

    Assuming you're using the enterprise edition of the database and that you've licensed the partitioning option, you specify the number of hash partitions you want when you create the table. The SQL Reference for the CREATE TABLE statement has the following example
    CREATE TABLE hash_products
        ( product_id          NUMBER(6)
        , product_name        VARCHAR2(50)
        , product_description VARCHAR2(2000)
        , category_id         NUMBER(2)
        , weight_class        NUMBER(1)
        , warranty_period     INTERVAL YEAR TO MONTH
        , supplier_id         NUMBER(6)
        , product_status      VARCHAR2(20)
        , list_price          NUMBER(8,2)
        , min_price           NUMBER(8,2)
        , catalog_url         VARCHAR2(50)
        , CONSTRAINT          product_status_lov_demo
                              CHECK (product_status in ('orderable'
                                                      ,'planned'
                                                      ,'under development'
                                                    ,'obsolete'))
        )
    PARTITION BY HASH (product_id)
    PARTITIONS 5
    STORE IN (tbs_01, tbs_02, tbs_03, tbs_04);
    You'd just specify 4 or 8 partitions, your partition key, etc.
    It doesn't make sense, however, to try to join two hash-partitioned tables on the hash key value. What is it, exactly, that you're trying to accomplish?
    Justin

  • Agent state changed to reserved but call is not ringing/landing for 30 seconds

    Hi All,
    we have IPCC 8.5, CVP 8.5, UCM 8.5. For the last few weeks, agents have intermittently found that their state changes to Reserved but the call does not land/ring for a while, and in the CVP logs we have seen the call going to RONA.
    We have cross checked the Device Target (4 CVP servers) its fine, Queue music is interruptable.
    In the Ingress/VXML gateway we have the dial-peer pointing to two subscribers with equal priority. It seems some calls are not routing to the agent phone (there is a delay between the voice gateway and the IP phone) for some reason. We don't use a SIP proxy; we use static routing to the subscribers.
    Please share your ideas.
    with Regards,
    Manivanna                  

    In Ingress/VXML gateway we have the dial-peer pointing to two subscribers with equal priority.
    The gateway should point to the Call Servers. The Call Servers should have static routes to the subscribers.
    If the call is not getting to the agent even though they go into Reserved (the Call Router has selected them), ensure that the SIP trunks to the Call Servers and the agent phones are in compatible partitions/CSS. Examine the logs on the Call Server when the INVITE is sent to the agent phone. If the INVITE returns 404 (not found) or 503 (unavailable), then the setup is wrong.
    Regards,
    Geoff

  • Outbound DID change | ring group

    1. We have CUCM 8.6, and 50 DIDs that the provider has tied to the main number. I have one user requesting that his own number show up for the external line that was set up on his phone.
    I think I should go with a different route pattern than 9.@, so for example I can choose 8.@ with the Calling Party Transform Mask set to the number that is supposed to be shown on customers' phones. Is there any solution other than this one? Also, I would like only one extension to have the ability to use 8.@, so it cannot accidentally be used by other extensions.
    2. I have two extensions, 110 and 111, that are supposed to ring if someone dials a specific number, 800999xxxx, from outside, or ext 333 from inside.
    I need advice on how to achieve this in CUCM 8.6. Thanks

    1. That one is fine; just configure your CSS and partitions accordingly so that only he can dial it.
    2. Map that DID to a line group.

  • Fact and dimension table partition

    My team is implementing a new data warehouse. I would like to know when we should plan the partitioning of the fact and dimension tables: before the data comes in, or after?

    Hi,
    It is recommended to partition the fact table (where we will have huge data volumes). Automate the partitioning so that each day it creates a new partition to hold the latest data (splitting the previous partition into two). Best practice is to create partitions on transaction timestamps: load the incremental data into an empty table (Table_IN) and then switch that data into the main table (Table). Make sure both tables (Table and Table_IN) are on the same filegroup.
    Refer below content for detailed info
    Designing and Administrating Partitions in SQL Server 2012
    A popular method of better managing large and active tables and indexes is the use of partitioning. Partitioning is a feature for segregating I/O workload within a SQL Server database so that I/O can be better balanced against available I/O subsystems while providing better user response time, lower I/O latency, and faster backups and recovery. By partitioning tables and indexes across multiple filegroups, data retrieval and management is much quicker because only subsets of the data are used, while the integrity of the database as a whole remains intact.
    Tip
    Partitioning is typically used for administrative or certain I/O performance scenarios. However, partitioning can also speed up some queries by enabling lock escalation to a single partition, rather than to an entire table. You must allow lock escalation to move up to the partition level by setting it with either the Lock Escalation option of the Database Options page in SSMS or the LOCK_ESCALATION option of the ALTER TABLE statement.
    After a table or index is partitioned, data is stored horizontally across multiple filegroups, so groups of data are mapped to individual partitions. Typical scenarios for partitioning include large tables that become very difficult to manage, tables that are suffering performance degradation because of excessive I/O or blocking locks, table-centric maintenance processes that exceed the available time for maintenance, and moving historical data from the active portion of a table to a partition with less activity.
    Partitioning tables and indexes warrants a bit of planning before putting them into production. The usual approach to partitioning a table or index follows these steps:
    1. Create the filegroup(s) and file(s) used to hold the partitions defined by the partitioning scheme.
    2. Create a partition function to map the rows of the table or index to specific partitions based on the values in a specified column. A very common partitioning function is based on the creation date of the record.
    3. Create a partitioning scheme to map the partitions of the partitioned table to the specified filegroup(s) and, thereby, to specific locations on the Windows file system.
    4. Create the table or index (or ALTER an existing table or index) by specifying the partition scheme as the storage location for the partitioned object.
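    The partition function in step 2 is essentially a boundary lookup. Here is a hedged sketch in Python rather than T-SQL: the boundary dates are invented, and the function mimics SQL Server's RANGE RIGHT semantics (each boundary value belongs to the partition on its right), the mapping that `$PARTITION` reports for a partitioned table.

```python
# Sketch: map a column value to a 1-based partition number from a sorted
# list of boundary values (RANGE RIGHT semantics).
import bisect
from datetime import date

# Boundary values, as you would pass to CREATE PARTITION FUNCTION.
boundaries = [date(2024, 1, 1), date(2024, 4, 1), date(2024, 7, 1)]

def partition_number(value):
    """Partition = 1 + number of boundaries <= value (RANGE RIGHT)."""
    return bisect.bisect_right(boundaries, value) + 1
```

    Three boundaries yield four partitions; a value equal to a boundary falls into the partition to the boundary's right, which is why date-based schemes usually use RANGE RIGHT so each period starts exactly on its boundary.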
    Although Transact-SQL commands are available to perform every step described earlier, the Create Partition Wizard makes the entire process quick and easy through
    an intuitive point-and-click interface. The next section provides an overview of using the Create Partition Wizard in SQL Server 2012, and an example later in this section shows the Transact-SQL commands.
    Leveraging the Create Partition Wizard to Create Table and Index Partitions
    The Create Partition Wizard can be used to divide data in large tables across multiple filegroups to increase performance and can be invoked by right-clicking
any table or index, selecting Storage, and then selecting Create Partition. The first step is to identify the column to partition on by reviewing all the columns available in the Available Partitioning Columns section located on the Select a Partitioning Column dialog box, as displayed in Figure 3.13. This screen also includes additional options.
Figure 3.13. Selecting a partitioning column.
The next screen, Select a Partition Function, is used to specify the partition function by which the data will be partitioned. The options include using an existing partition function or creating a new one. The subsequent page, New Partition Scheme, is where a DBA maps the rows of the table being partitioned to the desired filegroup(s); again, either an existing partition scheme can be used or a new one created. The final screen, Map Partitions, performs the actual mapping: specify the filegroup to be used for each partition and then enter the boundary values for the partitions. The ranges and settings on the grid can then be configured.
    Note
By opening the Set Boundary Values dialog box, a DBA can set boundary values based on dates (for example, partitioning everything in a column after a specific date), provided the partitioning column uses a date-based data type.
    Designing table and index partitions is a DBA task that typically requires a joint effort with the database development team. The DBA must have a strong understanding
    of the database, tables, and columns to make the correct choices for partitioning. For more information on partitioning, review Books Online.
    Enhancements to Partitioning in SQL Server 2012
SQL Server 2012 now supports as many as 15,000 partitions. When using more than 1,000 partitions, Microsoft recommends that the instance of SQL Server have at least 16GB of available memory. This recommendation particularly applies to partitioned indexes, especially those that are not aligned with the base table or with the clustered index of the table. Other Data Manipulation Language (DML) and Data Definition Language (DDL) statements may also run short of memory when processing a large number of partitions.
    Certain DBCC commands may take longer to execute when processing a large number of partitions. On the other hand, a few DBCC commands can be scoped to the partition
    level and, if so, can be used to perform their function on a subset of data in the partitioned table.
Queries may also benefit from a query engine enhancement called partition elimination, which SQL Server applies automatically when it is available. Here’s how it works. Assume a table has four partitions, with all the data for customers whose names begin with R, S, or T in the third partition. If a query’s WHERE clause filters on customer name looking for ‘System%’, the query engine knows that it needs only partition three to answer the request, which can greatly reduce I/O for that query. On the other hand, some queries might take longer if there are more than 1,000 partitions and the query cannot perform partition elimination.
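As a hedged illustration of the scenario above (the table, function, and column names are assumed, not from the text), a filter on the partitioning column lets the engine touch only the matching partition:

```sql
-- Customer is assumed to be partitioned on LastName, with R/S/T
-- values falling in partition 3. This predicate is eligible for
-- partition elimination because it filters on the partitioning column:
SELECT CustomerID, LastName
FROM Customer
WHERE LastName LIKE 'System%';

-- The $PARTITION function can confirm which partition a value maps to,
-- assuming the partition function is named pfCustomerName:
SELECT $PARTITION.pfCustomerName('System') AS partition_number;
```

A filter on a non-partitioning column, by contrast, generally forces the engine to consider every partition.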
Finally, SQL Server 2012 introduces some changes and improvements to the algorithms used to calculate partitioned index statistics. Primarily, SQL Server 2012 samples rows in a partitioned index when it is created or rebuilt, rather than scanning all available rows. This may sometimes result in somewhat different query behavior compared to the same queries running on earlier versions of SQL Server.
Administering Data Using Partition Switching
    Partitioning is useful to access and manage a subset of data while losing none of the integrity of the entire data set. There is one limitation, though. When
    a partition is created on an existing table, new data is added to a specific partition or to the default partition if none is specified. That means the default partition might grow unwieldy if it is left unmanaged. (This concept is similar to how a clustered
    index needs to be rebuilt from time to time to reestablish its fill factor setting.)
    Switching partitions is a fast operation because no physical movement of data takes place. Instead, only the metadata pointers to the physical data are altered.
    You can alter partitions using SQL Server Management Studio or with the ALTER TABLE...SWITCH
    Transact-SQL statement. Both options enable you to ensure partitions are
    well maintained. For example, you can transfer subsets of data between partitions, move tables between partitions, or combine partitions together. Because the ALTER TABLE...SWITCH statement
    does not actually move the data, a few prerequisites must be in place:
• Both the source and the target must be partitioned on the same column when switching between two partitions.
    • The source and target table must exist prior to the switch and must be on the same filegroup, along with their corresponding indexes,
    index partitions, and indexed view partitions.
    • The target partition must exist prior to the switch, and it must be empty, whether adding a table to an existing partitioned table
    or moving a partition from one table to another. The same holds true when moving a partitioned table to a nonpartitioned table structure.
    • The source and target tables must have the same columns in identical order with the same names, data types, and data type attributes
    (length, precision, scale, and nullability). Computed columns must have identical syntax, as well as primary key constraints. The tables must also have the same settings for ANSI_NULLS and QUOTED_IDENTIFIER properties.
    Clustered and nonclustered indexes must be identical. ROWGUID properties
    and XML schemas must match. Finally, settings for in-row data storage must also be the same.
    • The source and target tables must have matching nullability on the partitioning column. Although both NULL and NOT
    NULL are supported, NOT
    NULL is strongly recommended.
    Likewise, the ALTER TABLE...SWITCH statement
    will not work under certain circumstances:
    • Full-text indexes, XML indexes, and old-fashioned SQL Server rules are not allowed (though CHECK constraints
    are allowed).
• Tables in a merge replication scheme are not allowed. Tables in a transactional replication scheme are allowed with special caveats.
• Triggers are allowed on tables but must not fire during the switch.
    • Indexes on the source and target table must reside on the same partition as the tables themselves.
    • Indexed views make partition switching difficult and have a lot of extra rules about how and when they can be switched. Refer to
    the SQL Server Books Online if you want to perform partition switching on tables containing indexed views.
    • Referential integrity can impact the use of partition switching. First, foreign keys on other tables cannot reference the source
    table. If the source table holds the primary key, it cannot have a primary or foreign key relationship with the target table. If the target table holds the foreign key, it cannot have a primary or foreign key relationship with the source table.
    In summary, simple tables can easily accommodate partition switching. The more complexity a source or target table exhibits, the more likely that careful planning
    and extra work will be required to even make partition switching possible, let alone efficient.
    Here’s an example where we create a partitioned table using a previously created partition scheme, called Date_Range_PartScheme1.
    We then create a new, nonpartitioned table identical to the partitioned table residing on the same filegroup. We finish up switching the data from the partitioned table into the nonpartitioned table:
CREATE TABLE TransactionHistory_Partn1 (Xn_Hst_ID int, Xn_Type char(10))
ON Date_Range_PartScheme1 (Xn_Hst_ID);
GO
CREATE TABLE TransactionHistory_No_Partn (Xn_Hst_ID int, Xn_Type char(10))
ON main_filegroup;
GO
ALTER TABLE TransactionHistory_Partn1 SWITCH PARTITION 1 TO TransactionHistory_No_Partn;
GO
    The next section shows how to use a more sophisticated, but very popular, approach to partition switching called a sliding
    window partition.
    Example and Best Practices for Managing Sliding Window Partitions
    Assume that our AdventureWorks business is booming. The sales staff, and by extension the AdventureWorks2012 database, is very busy. We noticed over time that
    the TransactionHistory table is very active as sales transactions are first entered and are still very active over their first month in the database. But the older the transactions are, the less activity they see. Consequently, we’d like to automatically group
    transactions into four partitions per year, basically containing one quarter of the year’s data each, in a rolling partitioning. Any transaction older than one year will be purged or archived.
    The answer to a scenario like the preceding one is called a sliding window partition because
    we are constantly loading new data in and sliding old data over, eventually to be purged or archived. Before you begin, you must choose either a LEFT partition function window or a RIGHT partition function window:
1. How data is handled varies according to the choice of a LEFT or RIGHT partition function window:
• With a LEFT strategy, partition1 holds the oldest data (Q4 data), partition2 holds data that is 6 to 9 months old (Q3), partition3 holds data that is 3 to 6 months old (Q2), and partition4 holds recent data less than 3 months old.
• With a RIGHT strategy, partition4 holds the oldest data (Q4), partition3 holds Q3 data, partition2 holds Q2 data, and partition1 holds recent data.
• Following the best practice, make sure there are empty partitions on both the leading edge (partition0) and trailing edge (partition5) of the partition range.
• RIGHT range functions usually make more sense to most people because it is natural to start ranges at their lowest value and work upward from there.
2. Assuming that a RIGHT partition function window is used, we first use the SPLIT subclause of the ALTER PARTITION FUNCTION statement to split empty partition5 into two empty partitions, 5 and 6.
3. We use the SWITCH subclause of ALTER TABLE to switch out partition4 to a staging table for archiving, or simply to drop and purge the data. Partition4 is now empty.
4. We can then use MERGE to combine the empty partitions 4 and 5, so that we’re back to the same number of partitions as when we started. This way, partition3 becomes the new partition4, partition2 becomes the new partition3, and partition1 becomes the new partition2.
5. We can use SWITCH to push the new quarter’s data into the spot of partition1.
    Tip
    Use the $PARTITION system
    function to determine where a partition function places values within a range of partitions.
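The sliding window steps above can be sketched in Transact-SQL. Every object name here is illustrative (the text does not define them), and the statements assume a RIGHT range partition function on a transaction-date column with empty edge partitions already in place:

```sql
-- Step 2: split the empty trailing partition so a boundary exists
-- for the new quarter. NEXT USED names the filegroup that will
-- receive the partition created by the split.
ALTER PARTITION SCHEME psTxnDate NEXT USED FG_New;
ALTER PARTITION FUNCTION pfTxnDate() SPLIT RANGE ('2012-10-01');

-- Step 3: switch the oldest populated partition out to an empty
-- staging table (same structure, same filegroup) for archive or purge.
ALTER TABLE TransactionHistory SWITCH PARTITION 2 TO TransactionHistory_Archive;

-- Step 4: merge away the now-empty oldest boundary, keeping the
-- total partition count constant.
ALTER PARTITION FUNCTION pfTxnDate() MERGE RANGE ('2011-10-01');

-- Per the tip: confirm which partition a given value maps to.
SELECT $PARTITION.pfTxnDate('2012-07-15') AS partition_number;
```

Because SPLIT and MERGE here touch only empty partitions, and SWITCH changes only metadata, no physical data movement occurs, which is the point of the sliding window technique.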
    Some best practices to consider for using a slide window partition include the following:
    • Load newest data into a heap, and then add indexes after the load is finished. Delete oldest data or, when working with very large
    data sets, drop the partition with the oldest data.
• Keep an empty staging partition at the leftmost and rightmost ends of the partition range to ensure that partition splits (when loading new data) and merges (after unloading old data) do not cause data movement.
    • Do not split or merge a partition already populated with data because this can cause severe locking and explosive log growth.
    • Create the load staging table in the same filegroup as the partition you are loading.
    • Create the unload staging table in the same filegroup as the partition you are deleting.
    • Don’t load a partition until its range boundary is met. For example, don’t create and load a partition meant to hold data that is
    one to two months older before the current data has aged one month. Instead, continue to allow the latest partition to accumulate data until the data is ready for a new, full partition.
    • Unload one partition at a time.
    • The ALTER TABLE...SWITCH statement
    issues a schema lock on the entire table. Keep this in mind if regular transactional activity is still going on while a table is being partitioned.
    Thanks Shiven:) If Answer is Helpful, Please Vote

  • Are there any issues and potential solutions for large scale partitioning?

I am looking at a scenario in which a careful and "optimised" design has been made for a system. However, it has still resulted in thousands of entities/tables due to the complex business requirements. The option of partitioning must also be investigated due to the large amount of data in each table. This could potentially result in thousands of partitions on thousands of tables, if not more.
    Also assume that powerful computers, such as SPARC M9000, can be employed under such a scenario.
    Keen to hear what your comments are. It will be helpful if you can back up your statements with evidence and keep in the context of this scenario.

    I did see your other thread, but kept away from it because it seemed to be getting a bit heated. Some points I did notice:
People suggested that a design involving "thousands" of entities must be bad. This is neither true nor unusual. An EBS database may have fifty to a hundred thousand entities, no problem. It is not good or bad, just necessary.
    The discussion of "how many partitions" got stuck on whether Oracle really can support thousand of partitions per table. Of course it can - though you may find case studies that if you go over twenty or thirty thousand for a table, performance may degrade (shared pool issues, if I remember correctly).
    There was discussion of how many partitions anyone needs, with people suggesting "not many". Well, if you range partition per hour with 16 hash sub-partitions (not unreasonable in, for example, a telephone system) you have 384 per day which build up quite quickly unless you merge them.
Your own situation has never been fully defined. A few hundred million rows in a few TB is not unusual at all. But when you say "I don't have a specific problem to solve," alarm bells ring: are you trying to solve a problem that does not exist? If you get partitioning right, the benefits can be huge; get it wrong, and it can be a disaster. Don't do it just because you can.  You need to identify a problem and prove, mathematically, that your chosen partitioning strategy will fix it.
    John Watson
    Oracle Certified Master DBA
