Partitioning on a Table - Few Questions, Confusions

Hello All,
   I have a table with around 300 million records. The table has one key column (auto-incremented by 1) and a unique ID field.
A clustered primary key was created on the key column of this table.
Now, if I want to partition this table based on the UniqueID field, SQL Server does not allow it; it throws an error along the lines of "the partitioned column should be present in the primary key (clustered index)".
Is it mandatory that the partitioning column be present in the clustered index, or in the primary key of the table if one exists?
Could someone tell me what the prerequisites are to partition a table that already has a primary key on FieldA and a unique nonclustered index on FieldB, when I'd like to partition the table based on FieldC?
Thanks in Advance...
Unknown

Hi Roger,
I am a little confused: why do you want to partition a table based on a UniqueID? Partitioning is more of a logical thing, like dividing a table by year; for example, the table can be divided on a year-by-year basis, which makes more sense because the older data can be moved to slower storage if it isn't accessed as often as the new data, depending on the environment.
A UniqueID is effectively a random value; I am not sure how you plan to create a partition scheme on it, or whether that would be effective.
To answer your question: the partitioning column does not have to be the primary key itself, but if the table has a clustered primary key (or any other unique index), SQL Server requires the partitioning column to be part of that index key for the index to be partition-aligned, which is the error you are hitting.
For example, one of the scripts that I wrote:
CREATE PARTITION FUNCTION PF_test (datetime)
AS RANGE LEFT FOR VALUES
-- Caution: datetime is only precise to about 3 ms, so a '...23:59:59.999'
-- literal rounds up to midnight of the NEXT day; '...23:59:59.997' (or RANGE
-- RIGHT with 'YYYY-01-01' boundaries) keeps midnight rows in the intended year.
 ('2010-12-31 23:59:59.997',
 '2011-12-31 23:59:59.997',
 '2012-12-31 23:59:59.997',
 '2013-12-31 23:59:59.997',
 '2014-12-31 23:59:59.997',
 '2015-12-31 23:59:59.997')
GO
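To round that out, here is a minimal sketch (hypothetical table and filegroup choices, not your actual schema) of the matching partition scheme and a table bound to it. Note that the partitioning column is made part of the clustered primary key, which is exactly what the error you quoted is asking for:
CREATE PARTITION SCHEME PS_test
AS PARTITION PF_test ALL TO ([PRIMARY]);
GO

-- Hypothetical table: the partitioning column (CreatedDate) is included in the
-- clustered PK key so that the unique index can be aligned with the partitions.
CREATE TABLE dbo.DemoOrders
(
    OrderID     bigint IDENTITY(1,1) NOT NULL,
    UniqueID    uniqueidentifier NOT NULL,
    CreatedDate datetime NOT NULL,
    CONSTRAINT PK_DemoOrders
        PRIMARY KEY CLUSTERED (OrderID, CreatedDate)
) ON PS_test (CreatedDate);
GO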
I would suggest that you watch the below video to have a better understanding:-
http://technet.microsoft.com/en-US/sqlserver/gg545008.aspx
Please mark the answer as helpful if I have answered your query. Thanks and Regards, Kartar Rana

Similar Messages

  • [SOLVED] A few questions about partitions

    Hi,
    I'm using Windows XP as my primary OS atm. Last year I installed Linux Mint and got it working nicely along with XP, but now I want to try Arch. I have two hard disks - a 320GB SATA2 one and a secondary 80GB IDE one. I've separated about 50GB of the largest for Mint, and I'd like to use these for Arch. Thing is, I'm not sure how I partitioned my HD (I think Mint automated most of it) and I'm scared that I'll screw up. I ran fdisk -l with the Arch CD as suggested by the Beginner's Guide, and here's more or less what I got:
    Device Boot Start End Blocks Id System
    /dev/sdb1 * 1 32059 257513886 7 HPFS/NTFS
    /dev/sdb2 32060 38913 55054755 5 Extended
    /dev/sdb5 32060 38627 52757428+ 83 Linux
    /dev/sdb6 38628 38913 2297263+ 82 Linux swap/solaris
    A few questions:
    -> I assume sdb1 is where Windows is, and sdb2, 5 and 6 are Mint's. Is that correct? Why are there no sdb3 and 4?
    -> If I understand it correctly, hdX means a partition in an IDE disk, and sdX means one in a SATA disk. Is that right? If so, why don't I have an hda (which would be the 80GB HD) partition, and why do I have sdbs instead of sdas?
    -> When installing Arch, should I delete Mint's partitions and make new ones, or use the ones it already created?
    -> If I decide to start using Arch as my primary OS in the future, will it be possible to resize its home partition?
    Thank you very much and sorry for my cluelessness.
    Last edited by Caio (2009-07-05 20:26:29)

    Caio wrote: -> I assume sdb1 is where Windows is, and sdb2, 5 and 6 are Mint's. Is that correct? Why are there no sdb3 and 4?
    My guess is it's because you made the second partition extended. Normally 1, 2, 3, 4 are primary, or one of them is marked extended. It probably skipped 3 and 4 since you didn't create any more primary partitions.
    Caio wrote: -> If I understand it correctly, hdX means a partition in an IDE disk, and sdX means one in a SATA disk. Is that right? If so, why don't I have an hda (which would be the 80GB HD) partition, and why do I have sdbs instead of sdas?
    With the new libata driver they all show up as sd?, so no, that rule isn't correct anymore.
    Caio wrote: -> When installing Arch, should I delete Mint's partitions and make new ones, or use the ones it already created?
    I would reformat the partitions, but if they are laid out how you want them, just leave the partition table untouched. By "reformat" I mean recreate the filesystems.
    Caio wrote:
    -> If I decide to start using Arch as my primary OS in the future, will it be possible to resize its home partition?
    Thank you very much and sorry for my cluelessness.
    It looks like you do not have a separate home partition. My guess is you have your Windows partition, the extended partition, and then that is broken into one large partition for / with swap at the end.
    Edit: Oh, I should have pointed out that normally Arch creates a separate /home partition; if you want this you will have to restructure your extended partitions. This isn't necessary though, it's a preference thing; there are pros/cons to going either route.
    Last edited by Zepp (2009-07-05 14:53:58)

  • A few questions on partitioning the hard drive

    Hello,
    I am reinstalling Arch because I want to make the jump from i686 to Arch64. I really want this to be the last time I reinstall an OS on this laptop, so I have a few questions on the most efficient way to set up the partitions.
    1) Is it safer to have a separate boot partition? I really don't mind setting aside 100MB or so for boot, but I keep hearing you really don't have to do that with Arch anymore.
    2) How does optimization work? Are partitions that are created higher on the list faster to access? What I'm looking for is a really fast computer while in use. I don't care about boot time very much, and my swap drive is barely used. Would this be a good set up for what I'm looking for?
    SDA1 Root
    SDA2 VAR
    SDA3 Home
    SDA4 Swap
    SDA5 Boot
    3) Is ext4 a good idea? I really like the speed and all that, but it's still in development, right? When it's finished, would I be able to update the filesystems or once I make them are they always stuck in their current state? I was thinking ext2 for boot then ext4 for everything else.

    I'm not too sure on that one as I have never tried it, but it might be possible to boot from your live CD, back up your /var partition, reformat it, then copy the data back onto it; don't quote me on that, though.
    I don't think you will notice any difference regarding the order in which you mount your partitions. I have been through a bunch of different partition setups and have not noticed any speed difference between them, only between different filesystems.
    I have the same partition setup as your first post minus the boot partition
    /dev/sda1 / ext4 defaults,noatime 0 1
    /dev/sda2 /var reiserfs defaults,noatime 0 1
    /dev/sda3 /home ext4 defaults,noatime 0 1
    /dev/sda4 swap swap defaults 0 0

  • A few questions about managing partitions when performing a recovery

    Hello everybody,
    I purchased an L505-13Z about two weeks ago for personal use and light audio editing work. In these two weeks I took the time to explore and test this computer's performance and found it suitable for my needs, as it withstands even heavier audio work than I will need, as tested. I am really happy with this laptop! Anyway, a few optimizations I did for the audio software wiped some needed features of the computer, so I needed to install the OS again. Basically, I was always used to a system install wiping only the system drive, leaving the "D: Data" partition intact. I noticed this isn't the case with system recovery: after recovery, both the system and data partitions were wiped. Of course everything is backed up on an external drive, but does that mean every time I perform a recovery I will need to copy everything back from the backup drive? I'm missing the point of this; it's exactly like having all my data saved on the system partition. What's the need for the data partition then?
    Is there a way to perform a recovery and leave the "Data" partition untouched?
    My second question is if it's possible to change the size of the partitions somehow? I think that it's a bit useless to have 230G of free space for the system partition. For my needs, 100G is enough, and I could use the remaining 130G for the Data partition, I don't know why the drive is just split in half by default... (I am aware that a partition size change must wipe ALL data on the drive).
    Thanks everybody in advance for any help!

    If you use the HDD recovery procedure - http://aps2.toshiba-tro.de/kb0/HTD9102IR0000R01.htm - the structure and data on the second partition will not be deleted.
    So, if you want to install the OS using the HDD recovery option, you can move all important data from partition C to partition D.
    The whole HDD will be deleted if you use the recovery DVDs for the OS installation.
    You should not change the partition structure because it can have a negative influence on further HDD recovery procedures. It can happen that the recovery data will not be found anymore.
    Please note: the second partition D is not some kind of recovery or system partition. It is a normal, usual HDD partition. There is just a saved Toshiba recovery image in a folder called HDDRecovery.
    The best thing you can do, and I did it too, is to open the properties for this folder and mark it as hidden.
    You will not be irritated by this folder. It is there, but you cannot see it, and you won't delete it by accident. Use this partition as usual: create your own folders, copy data there, simply do your usual work.
    More questions?

  • A few questions about Boot Camp: installation, performance, which Win OS?

    Hello.
    I am planning on getting a windows OS. My main motives for this are because I would like to get some PC only games (I've been eyeing that Fallout 3 Game of the Year Edition that is soon to come out) and because there is a good chance that I will need some PC only programs for my college work. I just had a few questions before I did anything. Note: I am running 10.5.8 now but getting 10.6 soon.
    1. According to Wikipedia: "Its functionality relies on BIOS emulation through EFI and a partition table information synchronization mechanism between GPT and MBR combined". The only word I understood of that sentence was "emulation." I know that emulation software significantly reduces performance. Is this true for Boot Camp? (Say I were to get the exact same game for both Mac and Windows and set them to the exact same performance settings; when playing on Windows, would there be more lag than on OS X?)
    2. Which Windows OS should I get? Since I am just going to be using Boot Camp to run games and a few other programs, would XP be the best to get to optimize the application's performance (as opposed to Win 7)?
    3. How complex is installation? I am a decent Mac techie, but this is my first time with boot camp, and I am a Windows noobie.
    4. There seems to be a lot of talk about partitions. What exactly is a partition? I have some theories, but want to know for sure.
    Message was edited by: Tomatoes&RadioWires

    Hi,
    check out the following link, excellent advice and performance tests on gaming.
    cheers,
    Dave
    http://www.mactech.com/articles/mactech/Vol.25/25.04/VMBenchmarks/index.html

  • Proper Partitioning for a table

    Dear Netters,
    We have a table that is defined as follows:
    CREATE TABLE RECORDSINGLECHOICEVALUE
    (
      RECORDFK        RAW(16)  NOT NULL,
      CHOICEFK        RAW(16)  NOT NULL,
      FIELDFK         RAW(16)  NOT NULL,
      SOURCEENTITYFK  RAW(16)  NOT NULL,
      CONSTRAINT RDSINGLECHOICEVAL_PK PRIMARY KEY (RECORDFK, FIELDFK)
    );
    In it, we store GUIDs that reference other tables in the application.
    There are generally the following types of queries that use the table:
    SELECT COUNT(DISTINCT t1.SourceEntityFk)
    FROM RECORDSINGLECHOICEVALUE t1
        INNER JOIN RECORDSINGLECHOICEVALUE t2 ON (
               t1.SourceEntityFk = t2.SourceEntityFk      -- they belong to the same Entity
               AND t1.RecordFk = t2.RecordFk              -- ... AND to the same Record
               AND t2.FieldFk = {some other guid value})
    WHERE t1.FieldFk = {some guid value}                  -- always a single value
       AND t1.ChoiceFk IN {some list of guid values}      -- this part is optional
    or
    SELECT COUNT(DISTINCT t1.SourceEntityFk)
    FROM RECORDSINGLECHOICEVALUE t1
        INNER JOIN RECORDSINGLECHOICEVALUE t2 ON (
               t1.SourceEntityFk = t2.SourceEntityFk      -- they belong to the same Entity
               AND t2.FieldFk = {some other guid value})
    WHERE t1.FieldFk = {some guid value}                  -- always a single value
       AND t1.ChoiceFk IN {some list of guid values}      -- this part is optional
    The table could be joined to itself multiple times.
    For partitioning, we used HASH partition on FieldFk (128 partitions were created), since this is a scalar that participates in 99% of the queries against the table. However, due to the nature of the data, some of the partitions are heavily skewed (one field is more prevalent than others), resulting in some partitions having < 10k rows, and others having > 200M rows.
    Would you recommend an alternative partitioning schema? Sub-partitions?
    Thank you in advance.
    --Alex

    >
    The table in question (and we have a few other ones very similarly defined) participates in many queries against the database. Queries can be formed in such a way that the user can pick the Field (FieldFk) and (optionally) ChoiceFks at will. This is a highly flexible, user-driven query engine. Table(s) can be joined many times within the same query, resulting in sub-optimal performance.
    The goal is to come up with a schema (partitioning/indexing/any other) that will support a positive user experience. The 200M rows in a single partition was an example of when things start breaking loose. In the near future, this number can grow at least 10x.
    To clear up the business case, imagine human subjects who have genetic variants. Say there are 100 million people in the database (EntityFk). They all have 23 pairs of chromosomes, about 20,000 protein-producing genes of interest (460,000 combinations), and these have genetic variations (say, 10,000) of different types (types are defined as ChoiceFk).
    The query would then try to identify subjects that have a specific type of gene variation (Field = "Gene variation", Choice = "Fusion"), and are males (Field = "Gender", Choice = "Male"), and have been diagnosed with a specific disorder (Field = "Diagnosis", Choice = "Specific Disorder"), and that have a recording of treatment (Field = "Treatment", choice is NOT specified) in the database. So the table is getting joined onto itself in a few different ways, and many times (sometimes as many as 10).
    With stats in place, with index covering on Entity + Field + Choice (in all possible combinations thereof), with hash partitioning on Field alone (keys are GUIDs, so range partitioning, while possible, is kind of counter-intuitive), performance is suffering with increasing volume.
    We are evaluating other options, for different partition keys, indexing, and anything else in between.
    Any suggestions are much appreciated.
    >
    Thanks for the additional information. From what you describe it sounds like a classic use case for more of a star-schema architecture or am I still missing something?
    To see what I am talking about take a look at my extensive reply in this thread from a year ago
    Re: How to design a fact table to keep track of active dimensions?
    Posted: Mar 18, 2012 7:13 PM
    I provided example code that should give you the idea of what I mean.
    For use cases like this bitmap indexes are VERY efficient. And since you said this:
    >
    The problem is performance. Maintenance side of the house is minimal - data is loaded from an external source, once every X days, via ETL, and so that is not a concern.
    >
    you should only have to rebuild/update the bitmap indexes every X days as well. The main drawback of bitmap indexes is their performance when they are updated. They are NOT appropriate for OLTP systems, but for OLAP systems, where the index updates can be done offline in batch mode (or the indexes rebuilt), they are an ideal fit.
    You can easily conduct some tests using the example code I provide in that thread link as a template.
    In my example the attributes I used were: age, beer, marital_status, softdrink, state, summer_sport.
    You would use attributes like: Gene variation, Gender, Diagnosis, Treatment.
    Bitmap indexes store a bit for NULL values also so you could use NULL to indicate NO TREATMENT.
    Your goal would be to construct a query that uses a logical combination of your attributes to specify what you are interested in. Then, as you can see by the plans I posted, Oracle will take it from there and perform bitmap index operations using ONLY the indexes. This is one sample query I provided:
    SQL> select rowid from star_fact where
      2   (state = 'CA') or (state = 'CO')
      3  and (age = 'young') and (marital_status = 'divorced')
      4  and (((summer_sport = 'baseball') and (softdrink = 'pepsi'))
      5  or ((summer_sport = 'golf') and (beer = 'coors')));
    Your query would use your attribute names and values. Notice also that there are no multiple joins to the same table, although there can be if necessary without preventing Oracle from using the bitmap indexes efficiently.
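    To make that concrete, here is a minimal sketch of the bitmap-index approach, using hypothetical table and column names adapted from the business case above (not the poster's actual schema):
    CREATE TABLE star_fact (
      subject_id      RAW(16) NOT NULL,
      gene_variation  VARCHAR2(30),
      gender          VARCHAR2(10),
      diagnosis       VARCHAR2(64),
      treatment       VARCHAR2(64)    -- NULL can represent "no treatment recorded"
    );
    -- One bitmap index per attribute; rebuild/refresh them after each ETL load.
    CREATE BITMAP INDEX bx_fact_variation ON star_fact (gene_variation);
    CREATE BITMAP INDEX bx_fact_gender    ON star_fact (gender);
    CREATE BITMAP INDEX bx_fact_diagnosis ON star_fact (diagnosis);
    CREATE BITMAP INDEX bx_fact_treatment ON star_fact (treatment);
    -- The example question then becomes a pure bitmap combination, with no self-joins:
    SELECT COUNT(*)
    FROM star_fact
    WHERE gene_variation = 'Fusion'
      AND gender = 'Male'
      AND diagnosis = 'Specific Disorder'
      AND treatment IS NOT NULL;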

  • Few questions on Report programming

    Hi guys,
    I have few questions on report programming.
    1. What is the purpose of the statement REPORT zxxx? Even if I give a name other than my report name, I don't find any difference in the syntax check or functionality.
    2. What is the purpose of list headings in a report program? This option comes along with the text elements and selection texts.
    3. What is the purpose of a logical database? Even if it is efficient, why don't we use it frequently in our reports? Are there any limitations?
    All useful answers will be rewarded as usual :-)
    Thanks,
    Vinod.

    1. As you said, you don't get any syntax errors even after changing the report name, but there are chances of getting runtime errors.
    2. List headings are used when you create a list in a program; you can also create your own list and column headings.
    Refer to this link for further info:
    http://help.sap.com/saphelp_nw70/helpdata/en/e3/960a05eb0711d194d100a0c94260a5/content.htm
    3. The limitation of an LDB is the usage of GET statements, which act similar to SELECT - ENDSELECT. Also, if you don't choose the proper LDB, a single LDB may contain data retrieval from so many tables that it would make the entire process very slow, and it is very hard to find LDBs for modules other than HR.
    Only in the HR module is the data organized in such a way that using LDBs is much simpler.
    Reward points if useful.

  • A few questions about Pantone colors

    I have a few questions about Pantone colors, since this is the first time I have used them. I want to use them to create a letterhead and business cards in two colors.
    1)
    I understand that uncoated Pantone colors are more washed out than the coated ones. I heard that this is because of the way paper absorbs the ink; this is why the same ink results in different colors on different paper (right?). My question is: why is the Pantone uncoated black so much different from normal black (c=0 m=0 y=0 k=100) or rich black?
    When I print a normal document with CMYK, I can get pretty dark blacks. Why is it that I cannot get that dark black color with Pantone colors? Even text documents printed on a cheap printer come out darker than the Pantone color. It just looks way too grey to me.
    2) For a first mockup, I want to print the Pantone colors as CMYK (since I put around 10 different colors on a page for fast comparison). I know that these CMYK colors differ from the Pantone colors and that I cannot get a 100% accurate representation. But is there a way to convert Pantone to CMYK values?
    I hope that some of you can help me out with my questions.
    Thanks.

    You can get Pantone's CMYK tints in Illustrator, (Swatches Panel > Open Swatch Library > Color Books > PANTONE+ Color Bridge Coated or Uncoated) but in my view, what's the point?  If you're printing to a digital printer, just use RGB (HSB) or CMYK. Personally, I never use Pantone's CMYK so-called "equivalents."
    Pantone colors are all mixed pigmented inks, many of which fluoresce beyond the gamut limits of RGB and especially CMYK. The original Pantone Matching System (PMS) was created for the printing industry. It outlined pigmented ink formulations for each of its colors.
    Most digital printers (laser or inkjet) use CMYK. The CMYK color gamut is MUCH SMALLER than what many mixed inks, printed on either coated or uncoated papers can deliver. When you specify non-coated Pantone ink in AI, according to Pantone's conversion tables, AI tries to "approximate" what that color will look like on an uncoated sheet, using CMYK. -- In my opinion, this has little relevance to real-world conditions, and is to be avoided in most situations.
    If your project is going to be printed on a printing press with spot Pantone inks, then by all means, use Pantone colors. But don't trust the screen colors; rather get a Pantone swatch book and look at the actual inks on both coated and uncoated papers, according to the stock you will use on the press.
    With the printing industry rapidly dwindling in favor of the web and inkjet printers, Pantone has attempted to extend its relevance beyond the pull-date by publishing (in books and in software alliances, with such as Adobe) its old PMS inks, and their supposed LAB and CMYK equivalents. I say "supposed" because again, RGB monitors and CMYK inks can never be literally equivalent to many Pantone inks. But if you're going to print your project on a printing press, Pantone inks are still very relevant as "spot colors."
    I also set my AI Preferences > Appearance of Black to both Display All Blacks Accurately, and Output All Blacks Accurately. The only exception to this might be when printing on a digital printer, where there should be no registration issues.
    Rich black in AI is a screen phenomenon, unless in Prefs > Appearance of Black you also specify "Output All Inks As Rich Black" -- something I would NEVER do if outputting for an actual printing press. I always set my blacks in AI to "Output All Blacks Accurately" when outputting for a press. If you fail to do this, then on the press you will see any minor registration problems, with C, M, and Y peeking out, especially around black type.  UGH!
    Good luck!  :+)

  • Few questions about video performance and more

    Hello there,
    I'm quite sure I want to buy a MacBook as my new laptop, but one thing keeps bothering me: for the same money I can get a standard PC with a larger display and a better graphics card.
    In order to clear up some confusion, I'm going to ask you a few questions. First of all:
    1. How big is the display resolution on the MacBook (13" doesn't seem to be much)?
    2. Is the integrated graphics card enough to play World of Warcraft?
    3. Is it possible to ruin the operating system like Windows?
    4. Do you receive an installation CD of Mac OS X in case the operating system fails?
    5. Is it possible to install Linux on a MacBook?
    That's it, thank you in advance.

    1. How big is the display resolution on the MacBook (13" doesn't seem to be much)?
    For a laptop it's good. I have an HP 15.4" and don't feel a big difference.
    2. Is the integrated graphics card enough to play World of Warcraft?
    It's OK, but a bit slow.
    3. Is it possible to ruin the operating system like Windows?
    Yes, via Parallels or Boot Camp.
    4. Do you receive an installation CD of Mac OS X in case the operating system fails?
    Yes, even two.

  • Windows 7 on iMac: A few questions

    Hi,
    I am considering replacing my current Windows 7 PC with an iMac.
    Since I have lots of Windows 7 software/apps which I will still need to use, I have a few questions:
    1. What is the "best" way to enable a Windows 7 virtual instance on iMac?
         - Boot Camp
         - Parallels
         - Other?
    2. Do I need to partition the iMac HDD to enable this? If so, which tools would you recommend to do this relatively easily?
    3. Do I need a separate dedicated NTFS-formatted external HDD for the Windows 7 virtual instance?
    Thanks!
    Kevin

    1. There is no best way; there are advantages and disadvantages to each, so both exist to accommodate different needs. If you run graphics-intensive Windows apps such as 3D gaming, CAD/CAM, etc., then you would benefit from booting into Boot Camp; however, if your apps are more traditional office apps, then running virtualization apps such as Parallels or VMware Fusion is absolutely the way to go. The advantage of Boot Camp is that it's a dedicated Windows machine: you can boot into either Boot Camp (Windows) or OS X, but you cannot do both simultaneously. If you decide on virtualization, the performance is great and you can run OS X and Windows together. Many switchers (myself included) started with both OSs on their Macs but over time dropped MS Windows after they had migrated all their Windows apps to OS X based apps.
    2. If you decide on the Boot Camp option then launch Boot Camp Assistant and it will walk you through setting up the partition, if you buy virtualization apps then follow the install instructions that come with the app.
    3. No.

  • A few questions about my MacBook Pro?

    Hey guys, I have a few questions about Virtual Machines and a few other random questions.
    1. Which one is the most "complete" in its features when compared to a plain install of Win. 7?
    2. Which VM should I purchase?
    I have a 15" late 2012 MacBook Pro with a 2.6 ghz intel i7, 16gb ram and a 256gb SSD. (I was planning on running it off of an external hard drive— if that is possible)
    3. I have 206.12 GB of 255.2 GB free— should I just use boot camp instead?
    4. If I use Boot Camp, do I have to delete the data that's on my Mac currently to install Win. 7 or can I partition my current drive and keep my data?
    5. What is the "other" data that's taking up 25.51 GB of space on my SSD? (I have the same problem on my iPhone, but that's a different problem.)
    Thank you very much.

    You posted your Boot Camp questions in the MacBook Pro forum. Try asking in the Boot Camp forum where the Boot Camp gurus hang out. https://discussions.apple.com/community/windows_software/boot_camp

  • Diagnostic pack, Tuning pack are not in OEM 10g, Add partition to a table

    Hi All,
    I have 2 questions:
    Q.1: In the Oracle 9i Oracle Enterprise Manager Java console, we had a "Diagnostic Pack" and a "Tuning Pack" which helped us see performance tuning info (session statistics such as CPU time, PGA memory, physical disk reads, etc.) and provided help for SQL tuning and performance improvements. But in 10g, the same product (Oracle Enterprise Manager Java console) does not include these 2 packs, due to which we are unable to monitor and control performance tuning information/statistics. Now my question is: do we need to install these 2 packs separately in 10g? If yes, where can we get them? (I am sure that in 9i these packs came with the OEM console and we didn't need to get them separately.)
    Q.2: I have a partitioned table with 5 partitions based on range partitioning. Now our requirements have changed and we need to insert values beyond the 5th partition, so we need a 6th partition. Can someone tell me the command to add a new partition to the table? I tried "Alter table xxx add partition yyy ....", but it didn't work. If anyone can give me the correct syntax, it would be great.
    Thanks in advance.

    The OP is talking about the Java-based EM, not the web-based DB Console. In fact he/she has to change to DB Console, because the 10g Java EM no longer supports these tuning features.
    The ALTER TABLE ... partition syntax depends on the kind of partitioning; see the documentation:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_3001.htm#i2131048
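    For a range-partitioned table, a minimal sketch (hypothetical table, partition, and tablespace names; adjust to your own definitions) looks like this. Note that ADD PARTITION only works above the current highest boundary; if the table ends in a MAXVALUE partition, you have to split that partition instead:
    ALTER TABLE sales ADD PARTITION p6
      VALUES LESS THAN (TO_DATE('2013-01-01', 'YYYY-MM-DD'))
      TABLESPACE users;
    -- If a MAXVALUE partition already exists, split it instead of adding:
    ALTER TABLE sales SPLIT PARTITION pmax
      AT (TO_DATE('2013-01-01', 'YYYY-MM-DD'))
      INTO (PARTITION p6, PARTITION pmax);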
    Werner

  • Best approach to do Range partitioning on Huge tables.

    Hi All,
    I am working on 11gR2 oracle 3node RAC database. below are the db details.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    PL/SQL Release 11.2.0.3.0 - Production
    CORE 11.2.0.3.0 Production
    TNS for Linux: Version 11.2.0.3.0 - Production
    NLSRTL Version 11.2.0.3.0 - Production
    in my environment we have 10 big transaction tables (10 billion rows) and they are growing bigger and bigger. Now the management is planning to do a range partition based on the created_dt partition key column.
    We tested this partitioning strategy with a few million records in another environment with the steps below.
    1. CREATE TABLE TRANSACTION_N
    PARTITION BY RANGE ("CREATED_DT")
    ( PARTITION DATA1 VALUES LESS THAN (TO_DATE('2012-08-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART1,
    PARTITION DATA2 VALUES LESS THAN (TO_DATE('2012-09-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART2,
    PARTITION DATA3 VALUES LESS THAN (TO_DATE('2012-10-01 00:00:00', 'YYYY-MM-DD HH24:MI:SS')) TABLESPACE &&TXN_TAB_PART3 )
    AS (SELECT * FROM TRANSACTION WHERE 1=2);
    2. Exchange partition to move the data from the old table into the new partitioned table:
    ALTER TABLE TRANSACTION_N
    EXCHANGE PARTITION DATA1
    WITH TABLE TRANSACTION
    WITHOUT VALIDATION;
    3. Create the required indexes (took almost 3.5 hrs with parallel 16).
    4. Rename the tables and drop the old tables.
    This took around 8 hrs for one table which has 70 million records, so for billions of records it will take much more than 8 hrs. But the problem is we get only 2 to 3 hrs of downtime in production to implement these changes for all tables.
    Can you please suggest the best approach I can take to copy that much data from the existing table to the newly created partitioned table and create the required indexes.
    Thanks,
    Hari

    Sorry to tell you, but that test and partitioning strategy is essentially useless and won't work for your entire table anyway. One reason is that if you use the WITHOUT VALIDATION clause you must ensure that the data being exchanged actually belongs to the partition you are putting it in. If it doesn't, you won't be able to re-enable or rebuild any primary key or unique constraints that exist on the table.
    See Exchanging Partitions in the VLDB and Partitioning doc
    http://docs.oracle.com/cd/E18283_01/server.112/e16541/part_admin002.htm#i1107555
    >
    When you specify WITHOUT VALIDATION for the exchange partition operation, this is normally a fast operation because it involves only data dictionary updates. However, if the table or partitioned table involved in the exchange operation has a primary key or unique constraint enabled, then the exchange operation is performed as if WITH VALIDATION were specified to maintain the integrity of the constraints.
    If you specify WITHOUT VALIDATION, then you must ensure that the data to be exchanged belongs in the partition you exchange.
    >
    Comments below are limited to working with ONE table only.
    ISSUE #1 - ALL data will have to be moved regardless of the approach used. This should be obvious since your current data is all in one segment, but each partition of a partitioned table requires its own segment. So the nut of partitioning is splitting the existing data into multiple segments, almost as if you were splitting it up and inserting it into multiple tables, one table for each partition.
    ISSUE #2 - You likely cannot move that much data in the 2 to 3 hour window that you have available for downtime, even if all you had to do was copy the existing datafiles.
    ISSUE #3 - Even if you can avoid issue #2, you likely cannot rebuild ALL of the required indexes in whatever remains of the outage window after moving the data itself.
    ISSUE #4 - Unless you have conducted full-volume performance testing in another environment prior to doing this in production, you are taking on a tremendous amount of risk.
    ISSUE #5 - Unless you have fully documented the current, actual execution plans for your most critical queries in your existing system, you will have great difficulty overcoming issue #4, since you won't have the requisite plan baseline to know if the new partitioning and indexing strategies are giving you equivalent, or better, performance.
    ISSUE #6 - Things can, and will, go wrong and cause delays no matter which approach you take.
    So assuming you plan to take care of issues #4 and #5 you will probably have three viable alternatives:
    1. use DBMS_REDEFINITION to do the partitioning on-line. See the Oracle docs and this example from oracle-base for more info.
    Redefining Tables Online - http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables007.htm
    Partitioning an Existing Table using DBMS_REDEFINITION
    http://www.oracle-base.com/articles/misc/partitioning-an-existing-table.php
    2. do the partitioning offline and hope that you don't exceed your outage window. Recover by continuing to use the existing table.
    3. do the partitioning offline but remove the oldest data to minimize the amount of data that has to be worked with.
    You should review all of the tables to see if you can remove older data from the current system. If you can you could use online redefinition that ignores older data. Then afterwards you can extract this old data from the old table for archiving.
    If the amount of old data is substantial you can extract the new data to a new partitioned table in parallel and not deal with the old data at all.
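    To illustrate alternative #1, here is a minimal sketch of an online redefinition run (hypothetical schema name; it assumes the interim table TRANSACTION_N already exists with the desired partitioning, and that indexes/constraints are copied separately, e.g. with COPY_TABLE_DEPENDENTS - see the links above for the full procedure):
    -- 1. Check that the table can be redefined online using its primary key.
    BEGIN
      DBMS_REDEFINITION.CAN_REDEF_TABLE('APPOWNER', 'TRANSACTION',
                                        DBMS_REDEFINITION.CONS_USE_PK);
    END;
    /
    -- 2. Start the redefinition; rows are copied while the source stays online.
    BEGIN
      DBMS_REDEFINITION.START_REDEF_TABLE('APPOWNER', 'TRANSACTION', 'TRANSACTION_N');
    END;
    /
    -- 3. Resync changes made during the copy, then swap the two tables.
    BEGIN
      DBMS_REDEFINITION.SYNC_INTERIM_TABLE('APPOWNER', 'TRANSACTION', 'TRANSACTION_N');
      DBMS_REDEFINITION.FINISH_REDEF_TABLE('APPOWNER', 'TRANSACTION', 'TRANSACTION_N');
    END;
    /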

  • Use multiple partitions on a table in query

    Hi All,
    Overview:-
    I have a table, TRACK, which is partitioned on a weekly basis. I'm using this table in one of my SQL queries, in which I need to find a monthly count of some column data. The query looks like:
    SELECT COUNT(*)
    FROM Barcode B
    INNER JOIN Track PARTITION (P99) T
        ON B.item_barcode = T.item_barcode
    WHERE B.create_date BETWEEN 20120202 AND 20120209;
    In the above query I am fetching the count for one week, using the partition created on that table during that week.
    Desired output:-
    I want to fetch data between 01-Feb and 01-Mar, using the rest of that table's partitions for that duration in the above query. The weekly partitions currently present for the Track table are:
    P(99) - 20120202
    P(100) - 20120209
    P(101) - 20120216
    P(102) - 20120223
    P(103) - 20120301
    My question is: above, I've used one partition successfully; now how can I use the other 4 partitions in the same query if I am finding the count for one month (i.e. from 20120201 to 20120301)?
    Environment:-
    Oracle version - Oracle 10g R2 (10.2.0.4)
    Operating System - AIX version 5
    Thanks.
    Edited by: Sandyboy036 on Mar 12, 2012 10:47 AM

    I'm with damorgan on this one, though I was lazy and only read it twice.
    You've got a mix of everything in this one and none of it is correct.
    1. If B.create_date is VARCHAR2, this is the wrong way to store dates.
    2. All Track partitions are needed for one month if you only have the 5 partitions you listed, so there is no point in mentioning any of them by name. So the answer to "how can I use the other 4 partitions" is: don't; Oracle will use them anyway.
    3. BETWEEN 01-Feb and 01-Mar: be aware that the BETWEEN operator includes both endpoints, so if you actually used it in your query the data would include March 1.
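    In other words, drop the partition-extended name and filter on the partition key so the optimizer prunes partitions for you. A minimal sketch (assuming Track is partitioned on a create_date column stored, like Barcode's, as a number in YYYYMMDD form - adjust to the real key and datatype):
    SELECT COUNT(*)
    FROM Barcode B
    INNER JOIN Track T
        ON B.item_barcode = T.item_barcode
    WHERE T.create_date >= 20120201   -- first day of February...
      AND T.create_date <  20120301;  -- ...up to but NOT including March 1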

  • Partitioning the fact table

    Hi Gurus,
    I have a question regarding partitioning the cube. When you partition the cube from the Extras menu, will it partition the F table, the E table, or both?
    Second question: after partitioning, how will I know the newly created table names?
    Thanks,
    ANU

    Hi Anu,
    Partition: need and definition
    An InfoCube contains a huge amount of data, and the table size of the cube increases steadily, so when a query is executed on the cube it has to scan the entire table to get the records, for example Sales in Jan 08.
    Advantage of partitioning
    We can partition the cube so that smaller tables are formed; report performance increases because the query hits only the relevant partition.
    Steps for partitioning
    1. To partition a cube, it must not contain data.
    2. Partitioning can be done using the time characteristics 0CALMONTH and fiscal period.
    Steps:
    1. In change mode of the cube, select the Extras menu and the Partitioning option.
    2. Select the 0CALMONTH time characteristic.
    3. It will ask for the time period to partition on and the number of partitions; provide them.
    4. Activate the cube.
    In BI 7 we can repartition the cube even if it contains data:
    select the cube, right click, and select Repartitioning. There we can
    1. delete existing partitions,
    2. create new ones, or
    3. merge partitions.
    Partitioning of the Cube
    http://help.sap.com/saphelp_nw04s/helpdata/en/0a/cd6e3a30aac013e10000000a114084/frameset.htm
    Partitioning Fields and Values
    partition of Infocube
    Partitioning of cube using Fiscal period
    Infocube Partition
    After Partition
    You can find the partitions in the following tables in SE11:
    E tables: /BIC/E* or /BIC/E(cube name)
    Please also go through the following links:
    Partitioning of Cube
    Partitioning of ODS object
    Hope I have answered your question.
    Assign points if helpful,
    Thanks and regards
    Bala
