Setting up semantic partitions in ASE

Running ASE 15.5.
We have a non-partitioned table with 640 million rows, and we are looking at setting up range semantic partitions by create date.
There are 200+ million rows for 2013 and 380+ million rows for 2014.
I am thinking about setting up the following partitions by create date:
Partitions 1 - 4: for 2015, by quarter
Partition 5: for year 2014
Partition 6: 2013 and earlier
We would add new partitions for each new year.
Only current data is updated -- i.e. any data more than a month old is no longer updated.
Is this a viable breakdown?
1st attempt at partitioning:
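Something along these lines is what I have in mind (a sketch only -- the table and column names are placeholders, and the real DDL would need the right segments and boundary handling):

alter table big_table
partition by range (create_date)
    (p2013_and_prior values <= ('Dec 31 2013 11:59PM'),
     p2014           values <= ('Dec 31 2014 11:59PM'),
     p2015_q1        values <= ('Mar 31 2015 11:59PM'),
     p2015_q2        values <= ('Jun 30 2015 11:59PM'),
     p2015_q3        values <= ('Sep 30 2015 11:59PM'),
     p2015_q4        values <= ('Dec 31 2015 11:59PM'))

-- note: boundaries are inclusive, so datetime columns need an explicit
-- end-of-day value; repartitioning an existing 640M-row table moves
-- every row, so plan for the time and space involved

Is this a viable approach?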

Actually, I would like to comment that there are some nuances with partitioning and stats to be aware of... but as far as your question goes, a lot depends on your version of ASE. For pre-15.7 ASE, sampling works, but the best practice taught in classes was to do a full non-sampled update stats first, then 2-3 sampled updates, then a full non-sampled run again, in a cycle - so if doing update stats weekly, the first run of the month would be full non-sampled and the other weeks in the month would be sampled. The point of this cycle is to let you judge whether sampled stats behave similarly to non-sampled stats: if you see performance issues in the later (sampled) weeks of the month versus the first (non-sampled) week, sampling is not working well for you. How well sampling works often depends on how values are added to the different columns - e.g. scattered around evenly vs. monotonically increasing.
I personally have found that in later revs of 15.7 (e.g. SP100+), running stats with hashing is much faster than stats with sampling and generates more accurate stats. I know Mark has seen some issues - not sure where/why - but then I have seen problems with update stats generically in which we have had to delete stats before re-running update stats... so I am not sure whether those problems were caused by starting with non-hashed stats and then trying to update with hashed stats or not (I have always started with hashed stats).
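For illustration, that cycle might look like this (the table name and sampling percentage are placeholders, not prescriptions):

-- first run of the month: full, non-sampled
update index statistics big_table

-- remaining weeks: sampled
update index statistics big_table with sampling = 10 percent

-- ASE 15.7 SP100 and later: hash-based statistics gathering
update index statistics big_table with hashing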
Now, there is an interesting nuance of update stats with partitioning. Yes, you can run update stats on a partition basis... but it does not update table-level stats (problem #1), and it can also lead to stats explosion. I am not saying don't run update stats on a partition basis - I actually encourage it - but you should know what is going to happen. Partitioning - especially range partitioning - works best for maintenance commands when you get into the 30+ partition range, and especially the 50-100 partition range, assuming evenly distributed partitions. In your case, you will likely get the same effect on the 2014 and 2015 partitions, as they will be much smaller. When you run update stats on each partition (assuming the default histogram steps), you get 20 steps PER partition... which can mean 1000+ steps for the entire table (if 50 partitions). That is not necessarily a problem unless a query needs to hit all the partitions (or some significant number of them), at which point the query will need considerable proc cache to load those stats. So when using partitions, keep in mind that you may need to increase proc cache to handle the increased use during optimization. On the table stats perspective, what this means is that periodically you might want to run update statistics (not update index statistics) on the table... however, in my experience this has not been as necessary as one would think, and might only be needed if you see the optimizer picking a table/partition scan when you think it should be choosing an index.
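On the proc cache point above: if you do find optimization chewing through it, the relevant commands are the standard ones (the value below is purely illustrative - size it from your own monitoring):

-- check current usage first
sp_monitorconfig 'procedure cache size'

-- procedure cache size is set in 2K pages; 262144 pages = 512 MB
sp_configure 'procedure cache size', 262144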
In your case, you might only have 20 steps for the whonking-huge historical partition and then 20 steps for 2014 and 20 for each of the 2015 quarterly partitions. You might want to run update stats on the 2013-and-before partition with a larger step count (e.g. 100) and with 20 or so for the other partitions, as sketched below.
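Something like this (table and partition names are placeholders matching the scheme above):

update index statistics big_table partition p2013_and_prior using 100 values
update index statistics big_table partition p2014 using 20 values
update index statistics big_table partition p2015_q1 using 20 values
-- ...and likewise for the other quarterly partitions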
Using partitions the way you are doing is interesting from a different perspective. The current-data partition is extremely small and therefore fast to access (fewer index levels), and you don't get quite the penalty for queries that span a lot of partitions - e.g. a 5-year query doesn't have to hit 20 partitions the way it would with purely quarterly partitions. However, this assumes the scheme is:
Partition 1 = data 2+ previous years
Partition 2 = data for previous year
Partitions 3-6 = data for current year by quarter
This means that at the end of each year (or thereabouts) you will need to merge partitions, and whenever you merge partitions, you will need to run update stats again.
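For what it's worth, ASE 15.7 SP100 and later have a direct merge command; on 15.5 you would have to repartition the table instead. A sketch with placeholder names (check the exact syntax for your release):

-- ASE 15.7 SP100+ only
alter table big_table
    merge partition {p2015_q1, p2015_q2, p2015_q3, p2015_q4} into p2015

-- then refresh stats on the merged partition
update index statistics big_table partition p2015 using 20 values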
If the scheme instead is just to keep the historical partitions and, going forward, simply add quarterly partitions each year, you might want to check what the impact on queries is - especially reports using non-unique indexes where the date range spans a lot of partitions, or where the date is not part of the query at all.

Similar Messages

• Semantic Partitioning delta issue while loading data from DSO to Cube - BW 7.3

    Hi All,
We have created the semantic partition with the help of a BAdI to perform the partitioning. Everything looks good:
the first time I loaded the data, it was a full update.
The second time, I initialized the delta and pulled the delta from the DSO to the Cube. The DSO is standard, whereas the Cube is partitioned with the help of semantic partitioning. What I can see is that only the records updated in the latest delta show up in the report; all the rest are ignored by the system.
I tried compression, but it still did not work.
Has anyone faced this kind of issue?
    Thanks

    Yaseen & Soujanya,
    It is very hard to guess the problem with the amount of information you have provided.
    - What do you mean by cube is not being loaded? No records are extracted from DSO? Is the load completing with 0 records?
    - How is data loaded from DSO to InfoCube? Full? Are you first loading to PSA or not?
    - Is there data already in the InfoCube?
    - Is there change log data for DSO or did someone delete all the PSA data?
Since there are so many reasons for the behavior you are witnessing, your best option is to approach your resident expert.
    Good luck.
    Sudhi Karkada
Biking for MS Relief: http://main.nationalmssociety.org/site/TR/Bike/TXHBikeEvents?px=5888378&pg=personal&fr_id=10222

  • Data load from semantic partition info-provider

    Hi,
We are trying to load transaction data from a MultiProvider which is built on semantic partition objects. When I run the load against the semantic partition objects, it throws an error; however, the same load works when I run it against the individual cubes.
    Here is the log
    [Start validating transformation file]
    Validating transformation file format
    Validating options...
    Validation of options was successful.
    Validating mappings...
    Validation of mappings was successful.
    Validating conversions...
    Validation of the conversion was successful
    Creating the transformation xml file. Please wait...
    Transformation xml file has been saved successfully.
    Begin validate transformation file with data file...
    [Start test transformation file]
    Validate has successfully completed
    ValidateRecords = YES
    Error occurs when loading transaction data from other model
    Validation with data file failed
I was wondering if anybody has implemented a data load package with a semantic partition InfoProvider.
    We are on
    BI- 730, SP 7,
    BPC 10.0, SP 13
    Thanks
    prat

    Hello,
    BPC provides its own process chains for loading both transaction and master data from BW InfoProviders.  As far as I know that is the only way to load data from other sources into BPC.
    Best Regards,
    Leila Lappin

  • RAID1 set shows Apple_HFS partition, although disks had GUID partition

I am trying to build a RAID1 set for backup purposes from 2 external disks (FireWire-connected). Disk 1 has one partition with GUID. Disk 2 has 2 partitions with GUID; the first of these, the same size as disk 1, is used for the RAID set. When I build the RAID set with Disk Utility, the set gets an Apple_HFS partition, and the 2 volumes have partition type "Apple_RAID". The set cannot be used as a bootable disk for an Intel machine. Is this a bug?

The Intel® Mac uses a different hard disk format scheme than the PowerPC Mac. Reformatting to GUID is said to work with FireWire drives, and not necessarily as well with some USB 2.0 units.
TMO Quick Tip - Format a Drive to Boot Intel Macs:
http://www.macobserver.com/tip/2006/08/16.1.shtml
The Apple Partition Map and HFS+ have certain limitations in an Intel® Mac configuration where an external hard disk drive is set up to boot and run a computer from a viable clone or backup scheme. Since I don't do all that fancy stuff, I am not experienced enough to comment further; mine uses CCC to an HFS+-formatted FireWire HDD, and I make manual clones of my entire computers to partitions - bootable copies of everything, when I get around to doing it!
Good luck & happy computing!

  • Setting up efficient partitions

I just installed a new 1.5 TB Seagate HD in my desktop, which has a 500 GB Seagate in the other bay that functions as my system drive.
I also have a 500 GB LaCie d2 external FireWire drive which holds numerous files.
I want to make 3 partitions on the 1.5 TB drive.
    One to serve as bootable backup of system.
    One to use as my FCP capture disc
    One to become what was my external drive, freeing it up for my MB Pro use.
    My question:
Is one arrangement of the partitions from top to bottom more efficient than another?
    Thank you!

    Just make three partitions of equal size in your case. Use the first partition as the backup for your startup volume.

  • How to set up Apple Partition Map not to be a Master Boot Record

    Hi.
In order to copy the Leopard install disk to a flash drive, I need to format the 8 GB Cruzer (from which the U3 software was removed) with an Apple Partition Map rather than a Master Boot Record, so that it will be bootable on my Quicksilver G4. I have never used partitioning before - step by step, please.
    W.W.

    Hi, motsteve -
"Forget booting USB on any G4"
Actually, USB booting is possible on all G4 and later models, starting with the G4 (AGP) models, per -
Article #58430 - USB Info and Benefits of Dual-Channel USB
I think I read somewhere (couldn't find it just now) that the earlier versions of OS X are not USB-bootable, so booting from USB on G4s might need to be done using OS 9.
    I've never tried USB booting. However, I have made a bootable Zip100 disk for my G4/500 (AGP), using its internal Zip drive. Did it just to see if it would work - it does. There's a fair amount of room left over on the 100MB disk, too.
    Intel-based machines are also USB bootable, per this article -
    http://support.apple.com/kb/HT1948

• Problems setting up semantic tech in Oracle 11g running Linux

    Hi Every1,
I'm a novice at this semantic technology stuff and am trying to overcome that. I'm trying to set up an 11g semantic tech baseline on a Linux box and am having difficulties getting everything loaded. Here is where I am at:
1. Installed JDK V5.
2. Installed Oracle 11gR2 with Spatial and Partitioning.
3. Installed Jena V2.6.2 (jena-2.6.2.zip) IAW the Oracle Semantic Technologies Developer's Guide.
4. Installed the Jena Adaptor (jena_adaptor_for_release11-2.zip) IAW the Oracle Semantic Technologies Developer's Guide.
5. Installed Oracle WebLogic Server 11g Release 1 (10.3.1).
6. Installed Java 6.1.
7. Installed Joseki 3.4.0 IAW the Oracle Semantic Technologies Developer's Guide.
8. Created the J2EE data source in the WebLogic Server admin console IAW the Oracle Semantic Technologies Developer's Guide.
9. Completed the autodeploy tasks IAW the Oracle Semantic Technologies Developer's Guide.
10. Restarted WebLogic Server.
11. Attempted to verify the deployment via web browser (http://localhost.localdomain:7001/joseki) and received an "Error 404 - Not Found From RFC 2068" message instead of the expected Oracle SPARQL Service Endpoint page.
I'm at a loss … can anyone identify where I went wrong and how to fix this problem?

    Here are the steps I used loading Joseki.
    1. Download and Install Oracle WebLogic Server 11g Release 1 (10.3.1). For details, see http://www.oracle.com/technology/products/weblogic/.
    2. Ensure that you have Java 1.6 installed, because it is required by Joseki 3.4.0.
    3. Download Joseki 3.4.0 (joseki-3.4.0.zip) from http://sourceforge.net/projects/joseki/files/Joseki-SPARQL/.
    4. Unpack joseki-3.4.0.zip into a temporary directory. For example:
    mkdir /tmp/joseki
    cp joseki-3.4.0.zip /tmp/joseki
    cd /tmp/joseki
    unzip joseki-3.4.0.zip
    5. Ensure that you have downloaded and unzipped the Jena Adaptor for Oracle Database, as explained in Section 7.1.
    6. Create a directory named joseki.war at the same level as the jena_adaptor directory, and go to it. For example:
    mkdir /tmp/joseki.war
    cd /tmp/joseki.war
    7. Copy necessary files into the directory created in the preceding step:
    cp /tmp/jena_adaptor/joseki/* /tmp/joseki.war
    cp -rf /tmp/joseki/Joseki-3.4.0/webapps/joseki/StyleSheets /tmp/joseki.war
    8. Create directories and copy necessary files into them, as follows:
    mkdir /tmp/joseki.war/WEB-INF
    cp /tmp/jena_adaptor/web/* /tmp/joseki.war/WEB-INF
    mkdir /tmp/joseki.war/WEB-INF/lib
    cp /tmp/joseki/Joseki-3.4.0/lib/*.jar /tmp/joseki.war/WEB-INF/lib
    cp /tmp/jena_adaptor/jar/*.jar /tmp/joseki.war/WEB-INF/lib
    cp $ORACLE_HOME/md/jlib/sdordf.jar /tmp/joseki.war/WEB-INF/lib
    cp $ORACLE_HOME/jdbc/lib/ojdbc6.jar /tmp/joseki.war/WEB-INF/lib
Note that the last command copies ojdbc6.jar; specify ojdbc5.jar instead if you are using JDK 5.
    9. Using the WebLogic Server Administration console, create a J2EE data source named OracleSemDS. During the data source creation, you can specify a user and password for the database schema that contains the relevant semantic data against which SPARQL queries are to be executed.
    If you need help in creating this data source, see Section 7.2.1, "Creating the
    Required Data Source Using WebLogic Server".
    7.2.1 Creating the Required Data Source Using WebLogic Server
    If you need help creating the required J2EE data source using the WebLogic Server admin console, you can follow these steps:
1. Log in to: http://<hostname>:7001/console
2. In the Domain Structure panel, click Services.
3. Click JDBC.
4. Click Data Sources.
    5. In the Summary of JDBC Data Sources panel, click New under the Data Sources table.
    6. In the Create a New JDBC Data Source panel, enter or select the following values.
    Name: OracleSemDS
    JNDI Name: OracleSemDS
    Database Type: Oracle
    Database Driver: Oracle's Driver (Thin) For Instance Connections Versions: 9.0.1,9.2.0,10,11
    7. Click Next twice.
    8. In the Connection Properties panel, enter the appropriate values for the Database Name, Host Name, Port, Database User Name (schema that contains semantic data), Password fields.
    DATABASE NAME: orcl
    HOST NAME: orcl.localdomain
    PORT: 1521
DATABASE USER NAME: nciuser
    PASSWORD: <password>
    CONFIRM PASSWORD: <password>
    9. Click Next.
10. Select (check) the target server (or servers) to which you want to deploy this OracleSemDS data source.
    11. Click Finish.
    You should see a message that all changes have been activated and no restart is necessary.
10. Go to the autodeploy directory of WebLogic Server and copy files, as follows. (For information about auto-deploying applications in development domains, see: http://download.oracle.com/docs/cd/E11035_01/wls100/deployment/autodeploy.html)
    cd <domain_name>/autodeploy
    cp -rf /tmp/joseki.war <domain_name>/autodeploy
    In the preceding example, <domain_name> is the name of a WebLogic Server domain.
Note that while you can run a WebLogic Server domain in two different modes, development and production, only development mode allows you to use the auto-deployment feature.
    11. Check the files and the directory structure, as in the following example:
    autodeploy/% ls -1R ./joseki.war/
    ./joseki.war:
    application.xml
    index.html
    joseki-config.ttl
    StyleSheets/
    update.html
    WEB-INF/
    xml-to-html.xsl
    ./joseki.war/StyleSheets:
    joseki.css
    ./joseki.war/WEB-INF:
    lib/
    web.xml
    ./joseki.war/WEB-INF/lib:
    arq-2.8.0.jar
    arq-2.8.0-tests.jar
    icu4j-3.4.4.jar
    iri-0.7.jar
    jena-2.6.2.jar
    jenatest-2.6.2.jar
    jetty-6.1.10.jar
    jetty-util-6.1.10.jar
    joseki-3.4.0.jar
    junit-4.5.jar
    log4j-1.2.12.jar
    lucene-core-2.3.1.jar
    ojdbc5.jar
    sdordfclient.jar
    sdordf.jar
    servlet-api-2.5-6.1.10.jar
    servlet-api-2.5.jar
    slf4j-api-1.5.6.jar
    slf4j-log4j12-1.5.6.jar
    stax-api-1.0.1.jar
    wstx-asl-3.2.9.jar
    xercesImpl-2.7.1.jar
    12. Start or restart WebLogic Server.
    13. Verify your deployment by using your Web browser to connect to a URL in the following format (assume that the Web application is deployed at port 7001): http://<hostname>:7001/joseki
    You should see a page titled Oracle SPARQL Service Endpoint using Joseki, and the first text box should contain an example SPARQL query.
    This is where I got the http error message instead of the Oracle SPARQL Service Endpoint using Joseki screen.

  • Semantic Partition in DSO BW7.3

    Hi experts,
I am working with BW 7.3. I would like to know: if I do a semantic DSO partition by year or by plant, what options do I have if, for example, a new plant appears or a new year begins that I do not have in my DSO? Is there an automatic process, or what is the best practice for managing this?
    Many thanks!
    Judith.

The SPO wizard places an entry in the table "rrkmultiprovhint".
This entry provides a hint to the OLAP engine so it knows how the cubes are positioned. Using the characteristics specified in the table, a quick read is done on the dimension table of each cube. If the value is not found, then no sub-query is spawned on that InfoCube.
Using rrkmultiprovhint enables multiple values to be in a cube, whereas when you specify a constant, only one value can be read.
True, this is not quite as efficient as specifying constants, but it is a lot more flexible, and the read on the dimension table is quick (as opposed to scanning the fact table).

  • Setting up 46GB partition: How to make it NTFS?

Sorry folks, if this is a simple question for Windows aficionados: I set up a 46 GB Boot Camp partition to install WinXP Pro SP3. The Boot Camp utility formats this automatically as FAT, but at this size, AFAIK, it must be NTFS for WinXP to install. Hence, the Windows installer seems to stall at the install prompt where you are supposed to select "install new", "repair", or "quit" - no reaction on hitting the Enter key (= install new). I had hoped the Windows installer might do the formatting...
    How can I get the partition formatted as NTFS?

Hi Hatter & Fortuny: I waited for the partition selection window in the Windows installer, but could not get there. The installer stalled (what a nice linguistic coincidence: a stalling installer), not reacting to my keyboard input "Enter" when asking "new", "repair" or "quit".
I have a German keyboard and tried both an English and a German installer CD that I burnt from a WinXP ISO image. Both stopped at the same point and did not react to my input. Shall I let it sit for longer than 15 minutes?
By the way: I do have a printout of the Boot Camp instructions in front of me and followed its procedures. But it wouldn't be MS if it did what is expected. It's a PAIN.

  • Unable to set Non-FS partition type (GPT + RAID)

    I'm performing a new install, and am following the RAID wiki for a RAID1 on 2 disks.
    The wiki suggests to use the "Non-FS" partition type (DA00), rather than the "Linux RAID" partition type (FD00), to avoid potential issues down the road.
    However, the example given in the wiki is for cfdisk (MBR). I'm using GPT partitioning rather than MBR, and cgdisk only accepts the FD00 partition type, not DA00.
    Do the wiki's precautions against using FD00 only apply to MBR-partitioned drives? Or should I be setting the partition type some other way?

    A few comments:
    Four-digit (two-byte hexadecimal) partition type codes are, AFAIK, unique to my GPT fdisk (gdisk, sgdisk, and cgdisk) program and any programs that might mimic it. (The fdisk clone in busybox is one of these, IIRC.) These codes are not industry-standard; I created them just because I needed a compact way to describe partition types and to accept partition typing data from users. GPT actually uses 16-byte GUIDs as type codes, and those are very awkward, from a user interface perspective!
    GPT fdisk does not have a type code of "DA00," so any documentation that refers to such a code is either flawed or is referring to something other than my GPT fdisk. (Somebody might have a patched version of GPT fdisk that implements such a code, though.)
    AFAIK, there's no such thing as a generic "non-FS" partition type for GPT. The most complete list of GPT type codes I'm aware of is on the Wikipedia entry on GPT, and I don't see anything close to that meaning in its table.
    According to this site, which holds a good list of known MBR type codes, 0xDA is the MBR type code for "non-FS data." Given the way I create GPT fdisk type codes, that would translate to DA00 if there were a GPT equivalent. Since there is no GPT equivalent, though, DA00 remains invalid in GPT fdisk.
Tools based on libparted, such as parted and GParted, do a terrible job at presenting partition type code data to users. I've just skimmed it, but the page to which you refer, s1ln7m4s7r, appears to set up RAID data on a partition with a GUID of EBD0A0A2-B9E5-4433-87C0-68B6B72699C7 -- "Microsoft Basic Data". That is, the RAID partition will be flagged as holding an NTFS or FAT filesystem! That's one of the worst possible ways to set up a Linux RAID, in terms of its partition type code.
    For the most part, Linux doesn't care about type codes, on either MBR or GPT. There are a few exceptions to this rule, though. Thus, on a Linux-only system, using a bad partition type code won't have dire consequences; but on a dual-boot system, or if the disk gets moved to another computer for some reason, a bad type code choice could result in data loss. Windows might try to reformat the partition to use NTFS, for instance.
    The Linux RAID partition type code (GUID A19D880F-05FC-4D3B-A006-743F0F84911E on GPT; represented in GPT fdisk as FD00) was created to hold RAID data. Although I do recall running across advice somewhere to not use this type code for RAID data, I honestly don't recall what the reason was, but my recollection is that I was unimpressed.
    Since you didn't post a link to the page that recommended using "DA00" for RAID devices, Nairou, I can't comment on that advice in context; however, I suspect the author was confused or that the wiki went through some edits and something got mangled. Unless somebody can provide a good reason otherwise, I recommend using the RAID data type code on a partition that holds RAID data. If you want to use something else, create your own random GUID; do not use the type code for a Microsoft filesystem, especially if the computer dual-boots with Windows!

  • [SOLVED] Setting default boot partition on Macbook Pro w/ no dual boot

    I'm trying to install Arch on a new Macbook Pro 13", but I fear I've become somewhat confused by all the instructions in the wiki.
    I have no interest in dual booting, so I left the mac's existing boot and recovery partitions alone and set up a new filesystem and boot partition. I installed rEFInd on the new boot partition and ran refind-install successfully. Installing Arch seemed to go perfectly as well.
My issue is that when I reboot the computer, it still tries to boot from the original boot partition, which fails because OS X is no longer installed. Does anyone have any idea how I can boot from the rEFInd partition instead?
    Last edited by whitebrice (2015-05-31 04:31:21)

Thanks for the quick reply. I tried deleting the original boot partition, but I still had the problem. I ended up wiping all the existing partitions and trying again with gummiboot instead of rEFInd. It works now.

  • How do I set up and partition a new HD ?

    If I put a new HD into an old machine, will it just offer me the option to partition it after it powers on, or do I have to do something more technical to partition that drive. I am under the impression that having a large HD partitioned will help me to back up my OS in one of the large partitions, in case the first one fails, and I can't boot from that damaged partition. Sort of, if #1 fails, then boot from #2 or #3.
    Is this correct or no? How do I do it? What is the BEST method for having a dual boot system, or backing up my entire system to boot from an external drive ?
    Any help is greatly appreciated.

    Garth Algar (way):
    I am under the impression that having a large HD partitioned will help me to back up my OS in one of the large partitions, in case the first one fails, and I can't boot from that damaged partition. Sort of, if #1 fails, then boot from #2 or #3.
Well, yes, but what if the entire drive fails? Then you lose your primary boot volume as well as your backup(s). If your intention is protecting your data and being able to boot your computer and access your data in case of a failure, a better strategy would be to back up to an external HDD, or better, to clone the entire volume on your internal HDD to an external FireWire HDD, or, indeed, two separate external HDDs. Many of us maintain two or three separate backups/clones which we rotate so that we can more reliably protect our data.
    cornelius

  • Large data sets and table partitioning : removing data

    Hi,
I have to delete rows from a big table that uses partitioning.
Someone told me it is more efficient, performance-wise, to drop a whole partition (or subpartition, if I use composite partitioning) than to delete rows one at a time.
He says that data access (in my partition) will degrade if I delete rows progressively (in this partition) instead of keeping the rows and dropping the whole partition once none of its rows are used any more.
    What do you think about it?
    Thanks
    Sandrine

    Hi Sandrine,
I agree with what you're being told. It'll be much more efficient to "clone" the data you want to keep from your partition somewhere (a clone table, ...) and then drop the whole partition.
The main thing is that if you drop an object, there's no BEFORE IMAGE stored in your UNDO structures (UNDO tablespace / rollback segments), so you'll have far fewer disk I/Os.
Hope this helps ^^
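A minimal Oracle sketch of that approach (all names here are placeholders, not Sandrine's actual objects):

-- keep the rows you still need
create table big_table_keep as
    select * from big_table partition (p_old)
    where still_needed = 'Y';

-- then drop the partition: no per-row undo is generated
alter table big_table drop partition p_old;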

  • Large data sets and table partitioning

I have an Oracle 9i database and I need to store a huge volume of data in a table, so I am using composite partitioning.
Is there a maximum size beyond which my table's performance suffers? (If possible, I would like to store several terabytes of data.)
I would like to know if somebody has worked on a similar project, and which tools and environment they used (server, OS, memory size, ...).
    Thanks for your help.

Yes, users can update data.
I don't join data.
This is an OLTP system.
To estimate the 5 TB size, I just used the number of inserts per day, the number of days I keep data in the table, and the average size of one row (taken from an existing first-version table).
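(For example - purely illustrative numbers, not Sandrine's actual figures - 10 million inserts per day, kept for 365 days, at an average of 1.5 KB per row works out to 10,000,000 × 365 × 1.5 KB ≈ 5.5 TB, before indexes.)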
I have another question on partitioning: someone told me it is more efficient, performance-wise, to drop a whole partition (or subpartition, if I use composite partitioning) than to delete rows one at a time.
He says that data access (in my partition) will degrade if I delete rows progressively (in this partition) instead of keeping the rows and dropping the whole partition once none of its rows are used any more.
    What do you think about it?
    Thanks
    Sandrine

  • Why won't boot camp let me set up a partition?

So I've recently purchased the RPG Maker Humble Bundle and some Windows 8.1 Pro software, since I heard you can run Windows on a Mac.
I downloaded Boot Camp and copied the files to my hard drive as the instructions said, then went into Boot Camp Assistant to set everything up.
However, it keeps telling me "There is no USB drive connected to the system. Please insert a USB drive to continue." and won't let me do anything beyond that.
After spending all the money I have to get a Windows system, do I now seriously have to purchase a USB drive big enough to run Windows from?
Or does my computer not run Boot Camp, and has this all just been a huge waste of time and money on my part?

I didn't download it to the hard drive; the instructions I was following told me to make a copy of the files.
These are the instructions I was following, as linked on the download page: http://support.apple.com/kb/DL1720
In them, step 3 said to "Copy the entire contents of the .zip file to the root level of a USB flash drive or hard drive that is formatted with the FAT file system". That's what I meant by that.
So should I remove the copies?
Also, in the instructions you linked to, one of the steps says "Insert an external drive into the USB port on your Mac and keep it inserted while you install Windows and Windows support software." How big would you recommend the external drive to be? I don't own one, so I would have to purchase one.
Thanks for such a quick reply! (As you can probably tell, I was freaking out quite a lot.)
