Partition by hour

Hello,
Is there an automatic partitioning method that keeps the last hour's rows in one partition and all other rows in another partition?
Example:
table
ID NUMBER
DD DATE
partition p1 when DD is between sysdate - 1/24 and sysdate
partition p2 for all other values
Thanks

No there isn't such a method.
It is simply not possible to define a partition like your p1, because SYSDATE is not a constant value. Oracle would have to check every second whether any row had to be moved from p1 to p2, and that makes no sense.
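If the underlying goal is simply to keep the most recent hour's data in its own segment, one common workaround (on 11g and later) is hourly interval partitioning plus a housekeeping job that merges or drops older partitions. A minimal sketch only, using the column names from the question on a hypothetical table T:
-- Sketch: Oracle creates a new partition automatically for each hour of DD;
-- a scheduled job can then merge or drop partitions older than the last hour.
CREATE TABLE t (
  id NUMBER,
  dd DATE
)
PARTITION BY RANGE (dd)
INTERVAL (NUMTODSINTERVAL(1, 'HOUR'))
(
  PARTITION p_initial VALUES LESS THAN (TO_DATE('2024-01-01', 'YYYY-MM-DD'))
);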

Similar Messages

  • PARTITION NEWBIE: partition by hour?

    All,
    My developers want to find a way to partition their data based on the date the record was inserted into the database (stored in a TIMESTAMP datatype), but they want to be able to partition by the HOUR OF DAY the record came in. For example, everything that comes in between midnight and 00:59 goes in one bucket, from 1:00 to 1:59 am in another bucket, etc. This data is very dynamic and is deleted every 24 hours. Is this possible? We are on Oracle 9.2.0.4.

    If the goal is to improve the performance of the purge process, partitioning can definitely help, since you can drop a partition almost instantly (assuming you don't have any global indexes that would have to be rebuilt).
    What sort of queries are being done? Partitioning is generally somewhere between useless and harmful for single-row queries that look up rows via keys. Partitioning is helpful if you are issuing queries that return (or process) largish numbers of rows, provided all those rows are located in a single partition. Queries that process largish numbers of rows spread across multiple partitions are generally slower than the same query on a nonpartitioned table. If you have reports that need to process every row from a particular hour, for example, partitioning by hour would be great. If you're trying to find and update a single row based on a key, partitioning is rather useless.
    Justin
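    Since 9.2 has neither interval nor virtual-column partitioning, one way (a sketch only, with hypothetical names) to get the hour-of-day buckets the original poster wants is to carry the hour in its own column, populated by the application or a trigger, and list-partition on it; purging an hour then becomes a near-instant metadata operation:
    -- Sketch only: HOUR_OF_DAY (0..23) must be derived from CREATED_TS on insert
    -- by the application or a trigger, since 9.2 cannot partition on an expression.
    CREATE TABLE incoming_events (
      event_id    NUMBER,
      created_ts  TIMESTAMP,
      hour_of_day NUMBER(2)
    )
    PARTITION BY LIST (hour_of_day)
    ( PARTITION p00 VALUES (0),  PARTITION p01 VALUES (1),  PARTITION p02 VALUES (2),  PARTITION p03 VALUES (3),
      PARTITION p04 VALUES (4),  PARTITION p05 VALUES (5),  PARTITION p06 VALUES (6),  PARTITION p07 VALUES (7),
      PARTITION p08 VALUES (8),  PARTITION p09 VALUES (9),  PARTITION p10 VALUES (10), PARTITION p11 VALUES (11),
      PARTITION p12 VALUES (12), PARTITION p13 VALUES (13), PARTITION p14 VALUES (14), PARTITION p15 VALUES (15),
      PARTITION p16 VALUES (16), PARTITION p17 VALUES (17), PARTITION p18 VALUES (18), PARTITION p19 VALUES (19),
      PARTITION p20 VALUES (20), PARTITION p21 VALUES (21), PARTITION p22 VALUES (22), PARTITION p23 VALUES (23)
    );
    -- Purging an hour is then a near-instant metadata operation:
    ALTER TABLE incoming_events TRUNCATE PARTITION p03;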

  • Interval Partition By Hour

    I want to interval partition this table every 4 hours. Does this seem right?
    CREATE TABLE "BDub122"."EVENT"
    (     "EVENT_ID" NUMBER(20,0),
         "END_TIME" TIMESTAMP (3)
    ) PCTFREE 10 PCTUSED 40 INITRANS 1
    STORAGE(
    BUFFER_POOL DEFAULT
    FLASH_CACHE DEFAULT
    CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "EVENT_DATA"
    PARTITION BY RANGE ("END_TIME")
    INTERVAL(NUMTODSINTERVAL(4, 'HOUR')) -- Partition every 4 hours
    (PARTITION "2013-03-19 00:00:00" VALUES LESS THAN (TO_DATE
    ('2013-03-19 00', 'YYYY-MM-DD hh24', 'NLS_CALENDAR=GREGORIAN'))
    SEGMENT CREATION IMMEDIATE
    PCTFREE 10 PCTUSED 40 INITRANS 1 NOLOGGING
    STORAGE(INITIAL 8M NEXT 1M MINEXTENTS 1 PCTINCREASE 0 FREELISTS 1
    FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
    CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "EVENT_DATA",
    PARTITION "2013-03-19 04:00:00" VALUES LESS THAN (TO_DATE
    ('2013-03-19 04', 'YYYY-MM-DD hh24', 'NLS_CALENDAR=GREGORIAN'))
    SEGMENT CREATION IMMEDIATE
    PCTFREE 10 PCTUSED 40 INITRANS 1 NOLOGGING
    STORAGE(INITIAL 8M NEXT 1M MINEXTENTS 1 PCTINCREASE 0 FREELISTS 1
    FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT
    CELL_FLASH_CACHE DEFAULT)
    TABLESPACE "EVENT_DATA"
    The table creates fine, but I want:
    1.] To verify that I have the syntax right to accomplish what I'm trying to do.
    2.] Ask if there's a better way, although if the syntax is right this should work pretty well.
    Thank you
    Edited by: BDub122 on Mar 19, 2013 8:33 AM

    Works for me:
    CREATE TABLE EVENT
    ( "EVENT_ID" NUMBER(20,0),
      "END_TIME" TIMESTAMP (3)
    )
    SEGMENT CREATION IMMEDIATE
    PARTITION BY RANGE ("END_TIME")
    INTERVAL(NUMTODSINTERVAL(4, 'HOUR')) -- Partition every 4 hours
    (PARTITION "2013-03-19 00:00:00" VALUES LESS THAN (TO_DATE
    ('2013-03-19 00', 'YYYY-MM-DD hh24', 'NLS_CALENDAR=GREGORIAN')));
    insert into event values (1, sysdate - 8/24);
    insert into event values (1, sysdate - 4/24);
    insert into event values (1, sysdate);
    PARTITION_NAME
    2013-03-19 00:00:00
    SYS_P230
    SYS_P231
    SYS_P232
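    To confirm which partitions the interval clause has generated (a minimal check, assuming the EVENT table lives in your own schema), the dictionary can be queried directly:
    -- Lists the partitions and their upper bounds in positional order.
    SELECT partition_name, high_value
    FROM   user_tab_partitions
    WHERE  table_name = 'EVENT'
    ORDER  BY partition_position;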

  • Time Machine Back Up for External Hard Drive Partition

    Hey guys,
    I have had my iMac connected to an external hard drive which runs Time Machine for some time now (about a year and a half).
    I now wish to partition the external hard drive with a FAT32 section so that I can load data (videos, music, etc.) from my Windows laptop onto it.
    The external hard drive is currently not partitioned in any way, so I will have to create a new partition. Is there any way of doing this without losing the Time Machine data already saved?
    If not, is there any way of saving the Time Machine data onto the iMac so that, when the partitioning is complete, it can simply be reloaded onto the external hard drive?
    Thanks for your help!
    Philippe

    Hi, I've been trying to do that: using Disk Utility to create a partition on my EHD that already has Time Machine running on it. My EHD is 2TB and I have deleted last year's TM backups, so now I have over 900GB of available space on the EHD (previously it was 750GB). But when I ran the partition, after hours of processing ("modifying partition map...") - and I mean like 12 hours, no kidding - it resulted in a "There is no more space" error! What a waste of time!
    Is there any way to find out how much space is needed for partitioning beforehand? I wish DU could tell me that before the "modifying partition map" process began.
    FYI I am running on 10.5.8 and the partition I want to add is to be in Mac OS Extended (Journaled) format, the same with the current format.
    Any help is appreciated. Thank you very much in advance.

  • Are there any issues and potential solutions for large scale partitioning?

    I am looking at a scenario in which a careful and "optimised" design has been made for a system. However, it still resulted in thousands of entities/tables due to the complex business requirements. The option of partitioning must also be investigated due to the large amount of data in each table. This could potentially result in thousands of partitions on thousands of tables, if not more.
    Also assume that powerful computers, such as SPARC M9000, can be employed under such a scenario.
    Keen to hear what your comments are. It will be helpful if you can back up your statements with evidence and keep in the context of this scenario.

    I did see your other thread, but kept away from it because it seemed to be getting a bit heated. Some points I did notice:
    People suggested that a design involving "thousands" of entities must be bad. This is neither true nor unusual. An EBS database may have fifty to a hundred thousand entities, no problem. It is not good or bad, just necessary.
    The discussion of "how many partitions" got stuck on whether Oracle really can support thousands of partitions per table. Of course it can - though you may find case studies suggesting that if you go over twenty or thirty thousand for a table, performance may degrade (shared pool issues, if I remember correctly).
    There was discussion of how many partitions anyone needs, with people suggesting "not many". Well, if you range partition per hour with 16 hash sub-partitions (not unreasonable in, for example, a telephone system) you have 384 per day, which build up quite quickly unless you merge them.
    Your own situation has never been fully defined. A few hundred million rows in a few TB is not unusual at all. But when you say "I don't have a specific problem to solve" alarm bells ring: are you trying to solve a problem that does not exist? If you get partitioning right, the benefits can be huge; get it wrong, and it can be a disaster. Don't do it just because you can. You need to identify a problem and prove, mathematically, that your chosen partitioning strategy will fix it.
    John Watson
    Oracle Certified Master DBA
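    For reference, the hour-plus-16-hash-subpartitions layout mentioned above might look roughly like this (hypothetical table and column names; 11g interval syntax assumed):
    -- Sketch: hourly range partitions, each split into 16 hash subpartitions.
    -- 24 x 16 = 384 segments per day, which is why a merge/drop policy matters.
    CREATE TABLE call_detail (
      call_id   NUMBER,
      call_time DATE,
      caller    VARCHAR2(20)
    )
    PARTITION BY RANGE (call_time)
    INTERVAL (NUMTODSINTERVAL(1, 'HOUR'))
    SUBPARTITION BY HASH (caller) SUBPARTITIONS 16
    (
      PARTITION p_first VALUES LESS THAN (TO_DATE('2024-01-01', 'YYYY-MM-DD'))
    );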

  • Global Temp Table or Permanent Temp Tables

    I have been doing research for a few weeks and trying to confirm theories with bench tests concerning which is more performant: GTTs or permanent temp tables. I was curious as to what others felt on this topic.
    I used FOR loops to test the performance of inserts, and at high row counts the permanent temp table seemed to be much faster than the GTTs, contrary to many white papers and case studies I have read claiming that GTTs are much faster.
    All I did was FOR loops which iterated INSERT/VALUES up to 10 million records. And for 10 million records, the permanent temp table was over 500k milliseconds faster...
    Anyone have any useful tips or info that can help me determine which will be best in certain cases? The tables will be used for staging for ETL batch processing into a data warehouse. Rows within my fact and detail tables can reach into the millions before being moved to archives. Thanks so much in advance.
    -Tim

    > Do you have any specific experiences you would like to share?
    I use both - GTTs and plain normal tables. The problem dictates the tools. :-)
    I do have an exception, though, that does not use GTTs and still supports "restartability".
    I need to continuously roll up (aggregate) data. Raw data collected for an hour gets aggregated into an hourly partition. Hourly partitions get rolled up into a daily partition. Several billion rows are processed like this monthly.
    The eventual method I've implemented is a cross between materialised views and GTTs. Instead of dropping or truncating the source partition and running an insert to repopulate it with the latest aggregated data, I wrote an API that allows you to give it the name of the destination table, the name of the partition to "refresh", and a SQL statement (that does the aggregation - kind of like the select part of an MV).
    It creates a brand new staging table using a CTAS, inspects the partitioned table, slaps the same indexes on the staging table, and then performs a partition exchange to replace the stale contents of the partition with that of the freshly built staging table.
    No expensive delete. No truncate that results in an empty and query-useless partition for several minutes while the data is refreshed.
    And any number of these partition refreshes can run in parallel.
    Why not use a GTT? Because they cannot be used in a partition exchange. And the cost of writing data into a GTT has to be weighed against the cost of using that data by writing it (or some of it) into permanent tables. Ideally one wants to plough through a data set once.
    Oracle has a fairly rich feature set - and these can be employed in all kinds of ways to get the job done.
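    A stripped-down sketch of that exchange-based refresh (all names and the aggregation query are placeholders, not the poster's actual API):
    -- 1. Build a staging table containing the freshly aggregated data.
    --    Its columns must match the partitioned target table exactly.
    CREATE TABLE stage_2024010109 NOLOGGING AS
      SELECT period_start, metric, SUM(amount) AS total
      FROM   raw_data
      WHERE  period_start >= TO_DATE('2024-01-01 09', 'YYYY-MM-DD HH24')
      AND    period_start <  TO_DATE('2024-01-01 10', 'YYYY-MM-DD HH24')
      GROUP  BY period_start, metric;
    -- 2. Create the same local indexes on the staging table as on the target
    --    (omitted here), then swap it with the stale partition in one operation.
    ALTER TABLE hourly_agg
      EXCHANGE PARTITION p_2024010109
      WITH TABLE stage_2024010109
      INCLUDING INDEXES WITHOUT VALIDATION;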

  • Error while compiling a view

    Hi Gurus,
    Your help is greatly appreciated.
    I am trying to create the view below and am getting an error -- [Error] Execution (26: 116): ORA-00911: invalid character
    Please let me know if I am missing something here.
    CREATE OR REPLACE VIEW PROD.TERM_DIAL_ACCESS_PHNE_NUMS
    (TERMINAL_ID,
      TERM_DIAL_ACCESS_SEQUENCE,
      HOST_ID,
      SUPPRESS_1_PREFIX,
      PHONE_PREFIX,
      PHONE_SUFFIX,
      DIAL_ACCESS_PHONE_NUMBER,
      DESCRIPTION,                
      BLOCKED_FLAG,
      BLOCK_REASON,
      DATE_BLOCKED,
      BLOCKED_USER
    AS
       SELECT TERMINAL_ID,TERM_DIAL_ACCESS_SEQUENCE,HOST_ID,SUPPRESS_1_PREFIX,PHONE_PREFIX,PHONE_SUFFIX,
               DIAL_ACCESS_PHONE_NUMBER,DESCRIPTION,BLOCKED_FLAG,BLOCK_REASON,DATE_BLOCKED,BLOCKED_USER
       FROM TABLE (CAST(PROD.TERM_RELATED_REF_DATA.TERM_DIAL_ACCESS_PHNE_NUMS AS  PROD.TERM_DIAL_ACCESS_PHNE_NUMS_TAB));
    PROD.TERM_RELATED_REF_DATA  --is a package

    Hi Frank,
    I wanted to pass along a big thanks relating to some advice and code you gave regarding using a CONNECT BY statement. I further developed it for use in a chart in APEX and parametrized an input for minutes to get a breakdown count over different time intervals. This is based on having a start and end time and counting how many records fall into each interval.
    Sorry if this isn't the best place to post this, but I didn't know how to contact you personally on the Community.
    Regards,
    Steve
    WITH all_mins AS
    (
        SELECT  min_min + ((LEVEL - 1) / :P12_TIME_INTER)  AS period
             ,  min_min + (LEVEL / :P12_TIME_INTER)        AS next_period
        FROM
        (
            SELECT  TRUNC(TO_DATE(:P12_DATE_PICK,'DD-MON-YY HH24:MI'))                  AS min_min
                 ,  TRUNC(TO_DATE(:P12_DATE_PICK,'DD-MON-YY HH24:MI')) + (1 - 1/60/24)  AS max_min
            FROM    DUAL
            --WHERE CSTART_TIME = :P12_DATE_PICK
            --AND TO_CHAR(CEND_TIME, 'DD-MON-YY HH24:MI') = TO_CHAR(:P12_DATE_PICK + 1-1/60/24, 'DD-MON-YY HH24:MI')
        )
        CONNECT BY LEVEL <= 1 + (:P12_TIME_INTER * (max_min - min_min))
    )
    SELECT  null              link
            --, HOUR_MINUTE||' - '||WORKGROUP AS HOUR_MINUTE_WORKGROUP
            --, FULL_DATE
            --, HOUR
         ,  HOUR_MINUTE       label
            --, WORKGROUP
         ,  RESOURCE_COUNT    value1
            --, SUM(RESOURCE_COUNT) OVER (PARTITION BY HOUR, WORKGROUP) RESOURCE_COUNT_HOUR
    FROM
    (
        SELECT  W.CNAME                          AS WORKGROUP
             ,  TO_CHAR(a.period, 'DD-MON-YY')   AS FULL_DATE
             ,  TO_CHAR(a.period, 'HH24')        AS HOUR
             ,  TO_CHAR(a.period, 'HH24:MI')     AS HOUR_MINUTE
             ,  COUNT(u.ROWID)                   AS RESOURCE_COUNT
        FROM    all_mins a
        LEFT OUTER JOIN CSHIFT_DM u
             ON  a.period      <= u.CEND_TIME
             AND u.CSTART_TIME <  a.next_period
        INNER JOIN CWORKGROUP_DM W
             ON  u.CWORKGROUP_RID = W.RID
        GROUP BY  a.period, W.CNAME
        ORDER BY  a.period, W.CNAME
        --group by HOUR_MINUTE, RESOURCE_COUNT, HOUR, WORKGROUP, FULL_DATE
    )
    WHERE WORKGROUP = :P12_WORKGROUP
    --AND TO_DATE(FULL_DATE) = :P12_DATE_PICK
    ORDER BY HOUR_MINUTE;

  • I can't figure out what is slowing down my MBP i7 quad core. Help?!

    This is a used machine I bought off CL. Upgraded to 10.8.3 and upgraded RAM to 8 GB. It has been working great. Suddenly there is a delay between input and output: I click, wait, then it responds... like a PC... same with typing or any other command, and constant pinwheeling. I closed all applications and cleaned all temp caches. Used OnyX to do its magic. All to no avail. My hard drive is at 290 of 500 GB... my old Intel Core Duo 2009 with 4 GB RAM is whirring along running all the same software. Its HD is at the same ratio.
    Please help; this is a mid 2012 15-inch MBP i7 quad core. What could be slowing it down?

    You'll have to back up only user files to an external storage drive (not Time Machine) and disconnect it.
    Then hold the Command-R keys down and reboot the machine, use Disk Utility's Erase on the Macintosh HD and move the slider one spot to the right, then click Erase; this will zero-erase the entire partition. It takes hours, but let it complete.
    Then quit and reinstall OS X fresh with your Apple ID and password, update and install your software, then last add files back; do a scan using the free ClamXav just in case.
    If you still have problems, do an extended hardware check, which should check the RAM, and replace it with the original to see if the problem goes away.

  • Are there design issues in declined payments and Family Sharing?

    In Family Sharing, the Organizer is responsible for billing.  If a purchase fails due to a billing issue, the unpaid balance causes all of iTunes to become inaccessible until corrected.  The Organizer is not able to identify the cause, and Apple recommends "logging in with other members' accounts" to resolve the unpaid balance.  Further, you must verify the billing information via the Organizer's card security CVV code through the other member's account.
    Is Apple advocating sharing passwords, given the need to "see member purchases"?  This seems odd because you are already sharing the purchases.
    If purchases are billed to the Organizer's account, why is the Organizer unable to correct an issue within the family?
    Is this the best Apple can offer?


  • Getting my DBs timezone in oracle 11.2.0.3

    I need the timezone of the server that the DB sits on. I don't want the offset. I need the region name. This is because some regions use daylight savings time and some do not. Plus an SA can change the time of the OS. Another company manages our servers and DBs in production and we don't have contact with them. We could use a lookup table and just populate it, then update it when we find out what it is in production. The problem is that I have seen cases where the timezone on servers changes. Considering the lack of contact between the teams, we really need a reliable way to get the timezone out of the DB.
    We tried several ways. My list is below and I explain why this is not working.
    examples:
    sessiontimezone: this is the timezone of my server. In theory it should be the same as the DB. We cannot take the risk that this will be out of sync.
    dbtimezone: This gives the offset. Such as -5:00 for US EST. There are multiple regions that have this. Some do not use daylight savings time and some do. We would need America/New York instead.
    sessiontimezone gives the timezone setting for the client. This can be altered.
    dbtimezone just gives the offset such as -5:00
    We get data feeds from different parts of the world. We get some date-based data that is local to that region's timezone. We need to partition on this field. So we need to add a field to the DB and normalize it to the time local to our DB server.
    So if we get a record from California and the DB is on a server in US East, we add 3 hours. The offset won't help...
    1. a timezone that we are getting from may not be in daylight savings time. We are partitioning by hour.
    2. We would hit daylight savings time in New York before we hit it in California, so we would need to account for that in the math.
    This hourly partition is a fixed and hard requirement. We need this to be absolutely accurate.
    Here is what we tried:
    What I want (pseudo-code): “Select XXX as timezone_region_name” to return “America/New_York” or “UTC”. It may be that the timezone was not set for the database at install time; if it were, these queries would work.
    -- FAILED
    SELECT DBTIMEZONE FROM dual;
    --FAILED
    select systimestamp, to_char(systimestamp, 'TZR'), extract (timezone_region from systimestamp) from dual;
    --FAILED
    SELECT systimestamp
         AT TIME ZONE DBTIMEZONE "DB Time"
    FROM DUAL;
    --FAILED
    select to_char(systimestamp, 'TZR') from dual;

    Guess2 wrote:
    I don't want to modify my session. I want to know the timezone that the OS clock is set to. When you select sysdate from dual, Oracle uses the OS time. I need the active 'region', not the -05:00 offset.
    I do not want to have to rely on my OS's timezone being set correctly. This code will run off of application servers on separate machines. These are managed by a completely different company with whom I have no contact. So I need to be able to tell, by looking in the database, what timezone the DB is in.
    Oracle DB has NO capability for determining or maintaining date, time, or timezone details independent of the OS.
    Oracle DB relies on the OS for date, time, and timezone details,
    just as it relies upon the OS for file system operations.
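    On the original requirement of normalizing feed times before partitioning by hour: converting with named regions rather than fixed offsets handles daylight savings automatically. A small sketch, assuming the source region is known per feed (the regions and column name here are examples only):
    -- Convert a feed's local time ('America/Los_Angeles' as an example source
    -- region) into the region used for partitioning ('America/New_York').
    SELECT FROM_TZ(CAST(feed_time AS TIMESTAMP), 'America/Los_Angeles')
             AT TIME ZONE 'America/New_York' AS normalized_time
    FROM  (SELECT DATE '2024-03-10' + 1.5/24 AS feed_time FROM dual);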

  • Are the outputs (values) of the following queries equal?

    Are the outputs (values) of the following queries equal as evaluated by Oracle?
    SQL> select to_char(sysdate+1/24,'yymmddhh24') from dual;
    TO_CHAR(
    09081513
    SQL> select to_char(sysdate, 'yymmddhh24miss') from dual;
    TO_CHAR(SYSD
    090815130000

    Thank you for the reply.
    I use it to partition the table automatically and to derive the partition names and ranges from SYSDATE.
    For example, to create a partition per hour:
    sqlCreate='ALTER TABLE message ADD PARTITION '||'p_message'||
    TO_CHAR(SYSDATE+1/24,'YYMMDDHH24')||' values less than (TO_DATE('
    ''||TO_CHAR(SYSDATE+2/24,'YY/MM/DD HH24')||''
    ',''YY/MM/DD HH24''))';
    Is the previous code correct?
    Please modify it.
    Thank you
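    As a rough, unverified sketch only (not a confirmed fix for the poster's code): the statement is usually easier to get right if the quote doubling is spelled out term by term inside a PL/SQL block, along these lines:
    -- Sketch: build the ADD PARTITION statement with explicit quote doubling so
    -- the generated DDL contains valid TO_DATE literals; variable names assumed.
    DECLARE
      sqlCreate VARCHAR2(400);
    BEGIN
      sqlCreate := 'ALTER TABLE message ADD PARTITION p_message'
                || TO_CHAR(SYSDATE + 1/24, 'YYMMDDHH24')
                || ' VALUES LESS THAN (TO_DATE('''
                || TO_CHAR(SYSDATE + 2/24, 'YY/MM/DD HH24')
                || ''', ''YY/MM/DD HH24''))';
      DBMS_OUTPUT.PUT_LINE(sqlCreate);   -- inspect the generated DDL first
      EXECUTE IMMEDIATE sqlCreate;
    END;
    /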

  • TX lock

    I have a partitioned table (hourly partitioned). As a requirement, I have to create hourly partitions daily; this happens every morning as a scheduled job. At the same time, data gets inserted into this table every second.
    Now, when my create-partition procedure runs (while data is being inserted), there seems to be a huge delay. There seems to be some kind of locking (TX) happening in an alter index statement within my partition creation code.
    This is a code snippet.
    hourStr := to_char(targetDate, 'YYYYMMDD')||lpad(to_char(i),2,0);
    dateBound := to_char(to_date(hourStr,'YYYYMMDDHH24')+1/24,'yyyymmddhh24');
    -- partition name, tablespace name, index tablespace name
    parName := parPrefix || hourStr;
    tsName := dataTSPrefix || hourStr;
    idxTsName := idxTSPrefix || hourStr;
    begin
    for j in (select index_name from user_indexes where table_name = 'ACTIVITY') loop
    idxStmt := 'ALTER INDEX '||j.index_name||' MODIFY DEFAULT ATTRIBUTES TABLESPACE ' || idxTsName;
    dbms_output.put_line(idxStmt);
    execute immediate idxStmt;
    --dbms_lock.sleep(seconds => 5);
    end loop;
    ddlStmt := 'ALTER TABLE ' || AR_OWNER || '.' || AR_TAB_NAME ||
    ' ADD PARTITION ' || parName ||
    ' VALUES LESS THAN (to_date(' || '''' || dateBound || '''' || ',''YYYYMMDDHH24''))' ||
    ' TABLESPACE ' || tsName ||
    ' INITRANS ' || initrans ||
    ' STORAGE(FREELISTS ' || freelists || ')';
    dbms_output.put_line(ddlStmt);
    dbms_lock.sleep(seconds => 5);
    execute immediate ddlStmt;
    dbms_output.put_line('partition ' || parName || ' added successfully.');
    exception
    when others then
    dbms_output.put_line('partition ' || parName || ' add failed.');
    dbms_output.put_line(SQLCODE || SQLERRM);
    end;
    end loop;
    Any pointers on this will be appreciated.
    Thanks

    Some thoughts that might help clear up a few points:
    If the SELECT is interpreted as a SELECT FOR UPDATE it might well place a lock. Locking for half an hour seems a very long time, but that will depend on how long the transaction containing that SELECT lasts, and in real life they can last even longer. As to performance impacts, of course it depends whether something else will require those rows that are locked in the meantime. If the transaction is running by itself and there is no other activity, then placing a lock may not be a problem. If, on the other hand, something else is looking for the locked rows, then it will have to wait.
    You may wish to view the transaction and see if it can be broken up into smaller components for example but without more detailed or specific information it is difficult to offer any further advice.
    If you need further help I suggest you include some more details like the following:
    1) Oracle DB version and os
    2) Where are these transactions (SELECTS) being launched from and if possible some sample SELECTS that you are launching
    3) Which queries are you using on TOAD to view these locks ?
    Hope this is some help,
    Regards
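    To see who is blocking whom while the ALTER statements wait (a generic check on 10g and later, not tied to TOAD), something like this is often enough:
    -- Shows sessions that are currently blocked and the session blocking them.
    SELECT sid, serial#, blocking_session, event, seconds_in_wait
    FROM   v$session
    WHERE  blocking_session IS NOT NULL;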

  • Partitioned snapshot with parallel option unable to refresh for 14 hours

    Platform: SLES 8, Oracle 9.2.0.4
    Master site - partitioned table (7G)
    Snapshot site - partitioned snapshot (2.3G)
    Creation of snapshot takes about 2 hours
    The first fast refresh didn't finish even after 14 hours, so I killed the session.
    QUESTION:
    Any known issues in using partitioned snapshots, fast refresh and parallel option?
    Thanks
    Srdjan

    PS - I have found other posts indicating that clips smaller than 2s or sometimes 5s, or "short files", can cause this. Modern-style editing often uses short takes! Good grief, I cannot believe Apple. Well, I deleted half a dozen short sections and can export, but now of course the video is a ruined piece of junk and I need to re-do the whole thing, the sound, etc., which is basically taking as much time as the original. And each time I re-do it I risk this lovely error -50 again, trying to figure out what bugs it via trial and error instead of a REASONABLE ERROR MESSAGE POINTING TO THE CLIP IT CAN'T PROCESS. What a mess. I HATE this iMovie application - full of BUGS BUGS BUGS which Apple obviously will not fix, since I have had this product for a few years and see hundreds of hits on Google about this error from disappointed users. Such junk; I cannot believe I paid money for it and Apple does not support it with fixes!
    If anyone knows of a GOOD, reasonably priced video editing program NOT from APPLE, I am still looking for suggestions. I want to do more video in the future, but obviously NOT with iMovie!

  • Uninstalling Windows via Boot Camp, still partitioning after roughly 1 hour

    Hi,
    This is my first post, but I'm a long time searcher. I usually find what I'm looking for, but in this case I thought best to ask. I hope I follow the correct etiquette!
    So today I thought I would have a go at installing Windows XP via Boot Camp on my MacBook Pro. To cut a long story short, I completely messed it up because I (stupidly) read SP1 instead of SP2 on the CD, and essentially it didn't install anything from the boot CDs -- as you would expect.
    So after a quick panic I got back up and running in OS X, and have now just tried to uninstall Windows using Boot Camp, following the correct instructions, and after around an hour its status is still reading: Partitioning disk.....
    Is this to be expected, or is this wrong? Initially I only partitioned 32Gb for Windows as I didn't really want to do anything with it.
    I have done a few searches and got some mixed answers - ranging from it should be instantaneous, to it could take an hour.
    What should I do next?
    Thanks in advance.

    Well......First, I should say that although I've used Macs for many years I'm not an expert by any means. So, take my advice with caution.
    In some ways it seems like, without a full backup, you are sort of between a rock and a hard place =-( I'm not sure what will happen if you force quit the Boot Camp repartitioning. It might be o.k. and it might bugger up your hard drive and require you to reinstall everything. I'm not sure what effect "ejecting" the Windows partition would have.
    I guess at some point you're just going to have to do something. When I get to a point where I am in a situation like you are I search the Internet for as much information as I can get. If I can't find a specific answer to my problem at least I usually get some idea of what to try. Then I just go ahead and start trying things. When you get into that situation it is much less stressful if you have a complete backup. (It's really worth spending the money to buy an external hard drive and start using Time Machine to make a backup.)
    I know that my answer is not much help. Sorry about that. Maybe someone else will chime in with some more useful information. If they don't I guess that you're just going to have to take a plunge =-(

  • Disk Utility has been partitioning for about 3 hours now.

    Ok, so I wanted to partition my internal hard drive for Time Machine. So I went into Applications and opened up Disk Utility. I then selected my internal hard drive, adjusted the partitions to the desired sizes and clicked Apply. All the options grayed out and the little striped progress bar began moving. This was about 3-4 hours ago and nothing has changed. Someone help please; I'm afraid to force quit it or turn off my Mac at the cost of my hard drive.

    It's not really a progress bar just an animated blue and white "candy cane-like" bar
    If that is the case, that is not a progress indicator, so it hasn't yet started to partition and is stalled. Just stop, there will be no damage.
    The question now is why this has happened, and what you can change so that it doesn't happen on your next attempt but goes straight to partitioning, which should not take longer than 30 seconds or so.
