Re: Pointers across Partitions... For Your Information

At 09:41 AM 12/3/97 -0500, Sivaram S Ghorakavi wrote:
Hello Folks,
This is just FYI.
My application has a Server Partition, with a Service Object acting as an
interface to a C wrapper for a DLL, and a Client Partition, which
basically accesses the SO on each user interaction. Most of the functions
in the C code expect a "pointer to char/pointer to structure etc.". My
application CRASHES when I declare the pointers, allocate the memory, and
pass them from the client to the server partition. But if I pass Forte data
values, like TextData etc., to the SO and let it do the CASTING of the
values and pass them to the methods on the Server Partition, my
application is HAPPY.
Rule of Thumb: Never pass pointers across partitions. Pass Forte data values and let the other partition handle them.
Regards,
Sivaram S Ghorakavi mailto:[email protected]
International Business Corporation http://www.ibcweb.com/
Actually, this makes a lot of sense when you think about it.
First, a little about pointers. As most of you probably know, a pointer is
simply a variable (usually a long, in C++) that holds a memory address
where the actual data that you are interested in resides. In other words, the
pointer "points" to the memory location of the data.
Imagine an application running on some networked computer (call it "Machine
A"). This application has a partition (call it "Partition 1") that is
using some data stored in memory location 123456. In Partition 1, a
pointer variable ("P") has the value "123456", thus P "points" to the data.
Now let's pass P to another partition ("Partition 2") running on another
computer (call this "Machine B"). Remember, P really only contains the
value "123456". When it is accessed by the code in Partition 2, the
application would actually be looking for data in memory location 123456
***on Machine B***! Clearly, this is an invalid access and would cause all
kinds of problems.
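The point that a pointer is just a number, valid only inside one address space, can be sketched in a few lines (a Python sketch using ctypes; illustrative only, not Forte code):

```python
import ctypes

# Allocate a buffer and take its address: this integer IS the "pointer".
buf = ctypes.create_string_buffer(b"order #42")
p = ctypes.addressof(buf)

# Inside this process, the address can be dereferenced back to the data.
print(ctypes.string_at(p))   # b'order #42'

# But the pointer itself is just a number; it carries none of the data.
# Handing this integer to another process (or machine) and dereferencing
# it there would read whatever happens to sit at that address over there.
print(isinstance(p, int))    # True
```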
In this respect, Forte doesn't differentiate between partitions running on
different computers vs. partitions running on the same computer, so it
raises an exception whenever a program tries to pass pointers across any
partition boundaries.
Passing objects is different, however. Most of you know that Forte uses
what is called "copy semantics" when passing objects across partition
boundaries. This means that, while to the developer it may appear that an
object received from a remote partition is the "same" object as the one
that was passed by that remote partition, it is actually a copy - a
physically different object containing all of the same state information as
the original object. Thus, the "address" of an object doesn't really come
into play across partition boundaries.
I'm not going to take the time to explain here how anchored objects work,
but they are really object-based, not pointer-based, as well.
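Copy semantics is easy to demonstrate outside Forte (a Python sketch; the Order class is a hypothetical stand-in for any object crossing a partition boundary):

```python
import copy

class Order:
    def __init__(self, number, items):
        self.number = number
        self.items = items

original = Order(42, ["widget", "gadget"])

# The receiving partition gets an object with identical state...
received = copy.deepcopy(original)
print(received.number, received.items)   # 42 ['widget', 'gadget']

# ...but it is a physically different object, so the original's
# "address" never comes into play on the receiving side.
print(received is original)              # False
```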
Voilà! Mr. Ghorakavi's rule of thumb explained!
James Urquhart [email protected]
Product Manager phone: (510) 986-3513
Forte Software, Inc. fax: (510) 869-2092

Maybe, just maybe, you can use gparted to move the excess space in your new partition (the space beyond your existing partition's size) after first identifying it as swap.  Then move it to the end of the drive, leaving space for a new partition which is the same size as your existing partition.
That, maybe, will accept the dd transfer, since the partition sizes are identical.  After all is done, delete the swap.
I hope I made sense.....

Similar Messages

  • RE: passing UserWindow across partitions

    The "partition" reference is a red herring. What I mean is, the fact
    that you are running across partitions has nothing to do with the error
    you are receiving. Consider the error. It says one of two things is
    going wrong. Either:
    #1) You are trying to serialize a pointer, or
    #2) You are trying to send a pointer across a partition.
    You are, most manifestly, NOT receiving an error saying "serialized data
    cannot be sent across partitions." In your case, the correct
    interpretation of your situation is error #1). That is, you are trying
    to serialize a pointer (indirectly).
    In short, I have tried to do what you are doing, and the unfortunate
    answer to your problem is: you cannot serialize a window. Since the
    window contains pointers, and pointers cannot be serialized, windows
    cannot be serialized. Sorry...
    From: [email protected]
    To: [email protected]
    Subject: passing UserWindow across partitions
    Date: Tuesday, February 25, 1997 4:43PM
    I'm trying to pass a UserWindow object as a parameter to a method on a
    service object in a separate partition. The idea is to be able to store
    the window object on a server and retrieve it (and display it, etc.)
    later from a client. Works great in test mode, but when I run it in
    distributed mode, I get the error:
    Attempt to serialize a pointer value or to send a pointer value between
    partitions. Values of type 'pointer' can neither be serialized nor sent
    from one partition to another.
    I'm guessing this has something to do with the UserWindow.window
    attribute
    having attributes that are pointers to system-specific resources. For
    what
    it's worth, it doesn't work even if I haven't opened the window yet. In
    other words, the following code causes the same error:
    tempWin: someWindowClass = new();
    myServiceObject.takeThisWindow(tempWin);
    Has anyone ever tried this kind of thing before and succeeded?


  • Why does the default install make a partition for boot?

    The installer, if one lets it prepare one's hard drive, makes a partition for boot, and formats it as a second extended filesystem. I know that people debate the need for/desirability of a partition for boot, and I do not mean to revive that debate. Rather, I am interested in knowing why the Arch developers, like the developers of some other distributions, such as Gentoo, but unlike some, such as Slackware, call for a boot partition, with ext2fs file system, on a default install. Should Arch users interpret this as a recommendation? Also, why is the partition quite a bit larger (although tiny compared to the available space on hard drives these days) than seems necessary?
    Thanks.

    I've looked at a couple of books and a fair number of internet posts on this, and I must say that these responses are a good deal more concise and clear than what I have come across. I did find it interesting that the Slackware on-line manual doesn't make any effort to encourage it. On the other hand, what you guys are saying sounds sensible enough.
    I have both Arch and Gentoo running, sharing a boot partition, which I set at 50MB. I'm using 25% of it, which is why I ask about size - obviously not in terms of overall capacity, but in terms of need.
    Why not a lot less - say, even if one is considering dual boot, 20MB? And where do numbers like 32MB come from (the Arch, and indeed Gentoo, default, also referred to by T-Dawg)? Why not use round numbers?
    Does 32MB have some rational/irrational/astrological/lucky 7 connection to 64MB RAM, sort of like designating 128MB as swap instead of using numbers like 120MB or 125MB or 130MB?
    As you probably know, in Gentoo this stuff is done on the command line in fdisk, and I found it quite odd to be told in their manual to tell fdisk that the boot partition should be 32MB (assuming a single operating system) and swap should be 256MB, especially since fdisk and my hard drive took these numbers with a grain of salt and conspired to adjust them to slightly different numbers complete with decimals. Arch, on default, does the same.
    Then I find out that with two operating systems, I'm using about 13MB out of 50MB (or rather 49.9MB, as fdisk and my hard drive decreed).
    To change the subject a bit, I think that it is a good idea that the current version of Arch .08 would create, on the default track, a /home partition. There are some nice ideas in the install dialogue, including in the section on setting up a hard drive.

  • Does hash partition distribute data evenly across partitions?

    As per the Oracle documentation, hash partitioning uses Oracle hashing algorithms to assign a hash value to each row's partitioning key and place it in the appropriate partition, and the data will be evenly distributed across the partitions, of course subject to the following conditions:
    1. The partition count should follow 2^n logic.
    2. The data in the partition key column should have high cardinality.
    I have used hash partitioning in some of our application tables, but the data isn't distributed evenly across partitions. To verify it, I performed a small test:
    Table script :
    Create table ch_acct_mast_hash (
        cod_acct_no number
    )
    partition by hash (cod_acct_no)
    partitions 128;
    Data population script :
    declare
        i number;
        l number;
    begin
        i := 1000000000000000;
        for l in 1 .. 100000 loop
            insert into ch_acct_mast_hash values (i);
            i := i + 1;
        end loop;
        commit;
    end;
    Row-count check :
    select count(1) from Ch_Acct_Mast_hash ; --rowcount is 100000
    Gather stats script :
    begin
    dbms_stats.gather_table_stats('C43HDEV', 'CH_ACCT_MAST_HASH');
    end;
    Data distribution check :
    Select min(num_rows), max(num_rows) from dba_tab_partitions
    where table_name = 'CH_ACCT_MAST_HASH';
    Result is :
    min(num_rows) = 700
    max(num_rows) = 853
    As per the result, it seems there is a lot of skewness in the data distribution across partitions. Maybe I am missing something, or something is not right.
    Can anybody help me understand this behavior?
    Edited by: Kshitij Kasliwal on Nov 2, 2012 4:49 AM
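    The observed spread is roughly what a uniform hash produces for 100,000 rows over 128 partitions. A Python sketch (md5 stands in for Oracle's internal hash; the point is the statistical spread, not the exact function):

```python
import hashlib

PARTITIONS = 128
counts = [0] * PARTITIONS

# Hash 100,000 consecutive unique keys, as in the test above.
key = 1000000000000000
for _ in range(100000):
    h = int(hashlib.md5(str(key).encode()).hexdigest(), 16)
    counts[h % PARTITIONS] += 1
    key += 1

# Expected average is 100000/128, about 781 rows per partition; even a
# perfectly uniform hash scatters the per-partition counts around it.
print(min(counts), max(counts))
```

With a binomial standard deviation of roughly 28 rows per partition, minimum and maximum counts in the neighborhood of 700 and 850 are the expected spread, not evidence of skew.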

    >
    I have used hash partitioning in some of our application tables, but data isn't distributed evenly across partitions.
    >
    All keys with the same data value will also have the same hash value and so will be in the same partition.
    So the actual hash distribution in any particular case will depend on the actual data distribution. And, as Iordan showed, the data distribution depends not only on cardinality but on the standard deviation of the key values.
    To use a shorter version of that example, consider these data samples, each of which has 10 values. There is a calculator here:
    http://easycalculation.com/statistics/standard-deviation.php
    0,1,0,2,0,3,0,4,0,5 - total 10, distinct 6, %distinct 60, mean 1.5, stan deviation 1.9, variance 3.6 - similar to Iordan's example
    0,5,0,5,0,5,0,5,0,5 - total 10, distinct 2, %distinct 20, mean 2.5, stan dev. 2.64, variance 6.9
    5,5,5,5,5,5,5,5,5,5 - total 10, distinct 1, %distinct 10, mean 5, stan dev. 0, variance 0
    0,1,2,3,4,5,6,7,8,9 - total 10, distinct 10, %distinct 100, mean 4.5, stan dev. 3.03, variance 9.2
    The first and last examples have the highest cardinality but only the last has unique values (i.e. 100% distinct).
    Note that the first example is lower for all other attributes but that doesn't mean it would hash more evenly.
    Also note that the last example, the unique values, has the highest variance.
    So there is no single attribute that is controlling. As Iordan showed, the first example has a high %distinct, but all of those '0' values will hash to the same partition, so even using a perfect hash the data would use only 6 partitions.
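    The quoted means, deviations, and variances are sample (n-1) statistics and can be reproduced with Python's statistics module:

```python
import statistics

samples = {
    "0,1,0,2,0,3,0,4,0,5": [0, 1, 0, 2, 0, 3, 0, 4, 0, 5],
    "0,5,0,5,0,5,0,5,0,5": [0, 5, 0, 5, 0, 5, 0, 5, 0, 5],
    "5,5,5,5,5,5,5,5,5,5": [5] * 10,
    "0,1,2,3,4,5,6,7,8,9": list(range(10)),
}
for label, s in samples.items():
    pct_distinct = 100 * len(set(s)) // len(s)
    print(label, pct_distinct, statistics.mean(s),
          round(statistics.stdev(s), 2),      # sample standard deviation
          round(statistics.variance(s), 1))   # sample variance
```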

  • Disk partitions for ASM

    A client of mine is installing ASM and I have concerns about their proposed configuration. I'd like to run it past this forum to see if my concerns are justified.
    This is a non-RAC 11g install on RHEL5. We have 32 disks (2 vtracks X 16) of 144GB each. This storage will be used for 2 DEV databases running on the same machine. They want two diskgroups per database-- one for the Flash Recovery Area, and another one for everything else.
    The client's SA wants to spread the IO out across all the disks, so he has made 4 partitions on each of the 32 disks: one partition for each diskgroup. So the final allocation of disks is going to look something like this:
    create diskgroup DEV1_DATA
      normal redundancy
      failgroup vtrak1_1
      disk
        'ORCL:DISK01_P1'
        'ORCL:DISK16_P1'
      failgroup vtrak2_1
      disk
        'ORCL:DISK17_P1'
        'ORCL:DISK32_P1'
    create diskgroup DEV2_DATA
      normal redundancy
      failgroup vtrak1_2
      disk
        'ORCL:DISK01_P2'
        'ORCL:DISK16_P2'
      failgroup vtrak2_2
      disk
        'ORCL:DISK17_P2'
        'ORCL:DISK32_P2'
    create diskgroup DEV1_FRA
      normal redundancy
      failgroup vtrak1_3
      disk
        'ORCL:DISK01_P3'
        'ORCL:DISK16_P3'
      failgroup vtrak2_3
      disk
        'ORCL:DISK17_P3'
        'ORCL:DISK32_P3'
    create diskgroup DEV2_FRA
      normal redundancy
      failgroup vtrak1_4
      disk
        'ORCL:DISK01_P4'
        'ORCL:DISK16_P4'
      failgroup vtrak2_4
      disk
        'ORCL:DISK17_P4'
        'ORCL:DISK32_P4';
    OK? So here's my problem:
    This means that two separate databases will be sharing the same physical device and that this will result in IO contention. Am I not right? Or is ASM smart enough to deal with this?
    The client says that the more disks we spread data across, the better the performance will be. But I am doubtful.
    Trouble is, I have never set up ASM in this way; I have only added whole disks to diskgroups. Would anyone care to comment on this setup? Are my fears justified?
    Thanks.

    Hello,
    This means that two separate databases will be sharing the same physical device and that this will result in IO contention. Am I not right? Or is ASM smart enough to deal with this?
    ASM is smart enough to deal with 2 databases or more, but my concern would be that if the ASM instance goes down or crashes, it will take both databases down.
    The client says that the more disks we spread data across, the better the performance will be. But I am doubtful.
    Always test, and I tend to agree with the client, but that doesn't mean it will be the case all the time. So once again: test.
    Trouble is, I have never set up ASM in this way; I have only added whole disks to diskgroups. Would anyone care to comment on this setup? Are my fears justified?
    I didn't quite get this question; can you elaborate more here?
    Regards
    Edited by: OrionNet on Mar 9, 2009 4:40 PM

  • Directory Links across partitions are not allowed.

    I see a lot of this in the crawl log - what is this error?
    It's the http://<portal><documents>/Forms/Allitems.aspx
    No external source is crawled in that content source.
    How do I fix it?

    Hi,
    Based on your description, my understanding is that you got an error that "Directory Links across partitions are not allowed".
    For your issue, change the content source to crawl the individual BDC applications instead of the entire catalog:
    Here is a similar post, you can use as a reference:
    https://social.technet.microsoft.com/Forums/en-US/72a51853-50a8-4062-8d50-8b38d9ecd0d3/directory-links-across-partitions-are-not-allowedwhen-crawl-external-data-source?forum=sharepointsearch
    Here is a possible solution, refer to it:
    http://blogs.technet.com/b/speschka/archive/2013/02/04/resolving-the-directory-links-across-partitions-are-not-allowed-error-when-crawling-odata-bdc-sources.aspx
    Best Regards,
    Lisa Chen
    Forum Support

  • Partitioning for Performance

    Hi All,
    Currently we have a STAR Schema with ORDER_FACT and ORDER_HEADER_DIM , ORDER_LINE_DIM, STORE_DIM, TIME_DIM and PRODUCT_DIM.
    We are planning to partition ORDER_FACT for performance improvements in both reporting and loading. We have around 100 million rows in ORDER_FACT. Daily we insert around 1 million rows and update around 2 million rows.
    We are trying to come up with some good strategies, and we have a few questions.
    1) Our ORDER_FACT does not have any date columns except INSERT_DATE and LAST_UPDATE_DATE, which are more like timestamp columns. ORDER_DATE would be the appropriate one, but we do not store it in the fact. We have ORDER_DATE_KEY, which is the surrogate key of TIME_DIM.
    Can a range partition (monthly) still be performed? (I guess we need an ORDER_DATE column in our fact.)
    If somebody has handled this situation in some other way , any guidance will be helpful.
    2) The question below assumes we have a partitioned ORDER_FACT on ORDER_DATE.
    Currently we are doing a merge (Update/Insert) on ORDER_FACT. We have an incremental load, so only newly inserted or updated rows from the source are processed.
    Update/Insert is slow.
    Can we use PEL (Partition Enabled loading ) and avoid merge (Update/Insert) ?
    PEL is fine for new rows, since it replaces an empty partition in the target with a loaded partition from the source. But how do we handle updates and inserts into a partition that already has existing rows?
    Any help on these would be helpful.
    Thanks,
    Samurai.

    Speaking from our experience, at some point you need to build your fact rows so you need an insert/update prior to PEL anyway, and you would need your partitions closely matched to your refresh frequency for it really to be effective.
    So what we have done is focus on the "E" part of ETL.
    Our remote source database is mirrored on our side via Streams. This mirrors into a local copy that we can run various reports/ processes/ queries against without impacting production.
    We also perform a custom apply that populates a second local copy of the tables, but these are partitioned daily and are used for our ETL. So, at the end of the day we have a partitioned set of data that contains only the current status of rows that have changed over the day. Now, of course, this is problematic for ETL because you need to have all of the associated information with those changes in order to do your ETL.
    (Simple example: data in a customer's address record changes. Your ETL query undoubtedly joins the customer record and the address record to build your customer dimension row. But Streams only propagates the changed address record, so you wouldn't have the customer record in that daily partition for your join.)
    So, we have a process that runs after the Streams apply is finished that walks the dependency tree and populates all dependent data into the current daily partition, so - at the end of our prep process we have a partitioned set of data that holds a complete set of source tables where anything has changed across any dependencies.
    This gives us a small, efficient daily data set to run our ETL queries against.
    The final piece of the puzzle is that we access this segment via synonyms, and the synonyms are pointed at this day's partition. We have a control structure that manages the list of partitions and repoints the synonyms prior to running the ETL. The partition loading and the ETL synonym pointing are completely decoupled so, for example, if we ever needed to suspend our ETL to get a code fix in place we can let the partition loading move ahead for a day or two and then play catchup loading the partitions in sequence and be confident that we have each end-of-day picture there to use.
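    The repointing step can be sketched in outline (a hypothetical sketch: the schema, table, and synonym names are invented, and in production the cursor would come from an Oracle client library):

```python
import datetime

# Hypothetical control step: point each ETL synonym at the current
# day's staged partition table before the ETL queries run.
def repoint_synonyms(cursor, load_date):
    suffix = load_date.strftime("%Y%m%d")
    for base in ("customer", "address", "orders"):
        cursor.execute(
            f"CREATE OR REPLACE SYNONYM etl_{base} "
            f"FOR stage.{base}_p{suffix}"
        )

# Demonstrate with a stub cursor that just records the DDL it is given.
class StubCursor:
    def __init__(self):
        self.sql = []
    def execute(self, stmt):
        self.sql.append(stmt)

cur = StubCursor()
repoint_synonyms(cur, datetime.date(2012, 11, 2))
print(cur.sql[0])
# CREATE OR REPLACE SYNONYM etl_customer FOR stage.customer_p20121102
```

    Because only the synonyms move, the partition loading and the ETL can stay decoupled exactly as described above.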
    By running our ETL against only the changed data, we achieve huge efficiencies in query performance. And by managing the ETL partitions, we don't incur the space cost of a second full copy of the source, as we prune out the partitions once we are satisfied with the load at the end of a month (with full backups, of course, in case there is ever a huge problem to go back and correct).
    Now for facts, of course, we expect these to be insert only. Facts shouldn't change. For dimensions we use set based fail over to row based (target only), with a couple specified to be Row Based Target Only as they are simply too large to ever complete in Set Based Mode.
    Yes, this is a bit of a convoluted process - exacerbated by our need to have a full local copy for some reporting needs and the partitioned change copy for the datamart ETL, but at the end of the day it all works and works well with properly designed control mechanisms.
    Cheers,
    Mike

  • Can I use One Hard Drive for 2x Time Machine Back Ups and another partition for all my data?

    Hi Forum,
    I am planning to buy a RAID disc setup. If I get a 2-disc setup, on one disc, can I have:
    Time Machine back up for one Mac
    Time Machine back up for 2nd Mac
    Another partition for all my data
    then have that RAID'd to the second disc?

    No one?
    And if this setup was on one disc, and one partition for one TM was for the Mini that has this drive attached, would TM back up this whole drive (2x TM partitions and 1x partition with data on) OK? It wouldn't get confused by trying to back itself up?

  • I am setting up a time machine backup to a external Hard drive.  I want to backup by Mac book Pro running OSX 10.8.5. I would like to Partition the disk and use one partition for Time machine backups and the other for my Lightroom backups. How to do this?

    I want to create a two-partition disk: one partition for Time Machine, the other for Lightroom backup. Currently Time Machine is using the entire drive, and it is doing the initial encryption, which is about 29% complete after two days. I've decided that I want to turn encryption off and partition the disk. So how do I start over?

    With the external drive attached, open Finder>Applications>Utilities>Disk Utility.  Select the external drive from the list in the left side panel of the DU window.  In the main window panel, click Partition in the buttons top center of that panel.
    Select the number of partitions you want and adjust their sizes.
    For the first partition, click to highlight the partition, then select the format, Mac OS Extended (Journaled), and then the partition table as GUID [both of those are the defaults].  Click Apply and it will ask to confirm and erase and format that partition... oh, give the partition a name, like Backup. 
    Then repeat those steps for the second partition..and remember to name it...something like Lightroom.
    Close Disk Utility and you are ready to send TM to the one partition, and do your backup of Lightroom to the second partition.
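    The same two-partition layout can also be scripted from Terminal (a sketch built around diskutil's partitionDisk verb; the identifier disk2 is a placeholder - check `diskutil list` first, because the command erases the whole disk):

```python
import subprocess

# Placeholder identifier: verify with `diskutil list` before running,
# because partitionDisk ERASES the entire target disk.
disk = "disk2"
cmd = [
    "diskutil", "partitionDisk", disk, "2", "GPT",
    "JHFS+", "Backup",    "50%",   # partition 1: Time Machine target
    "JHFS+", "Lightroom", "50%",   # partition 2: Lightroom backups
]
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to actually partition
```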

  • How to use recovery partition for installing OS

    Hello,
    I tried a lot to recover my OS with Lenovo's own button for system restore, but it didn't work, and finally I installed OS XP Professional and manually installed all drivers. I must say to Lenovo management that this way of theirs is absolutely more complicated than simply providing Lenovo customers a backup CD with the laptop. Even when I called the Lenovo help line for a backup system CD, they demanded money for the CD, even though it should be free for Lenovo customers, as the Lenovo system restore button and the whole partition topology seem to be a total failure. If someone here could please give me advice: what should I do with that partition of my hard drive which holds the system backup from Lenovo? I made 3 partitions on my HDD, but I didn't remove that partition... so now I want to know, can I still use that partition for system restore, or should I remove it to free up my HDD space?
    Thanks for reading
    Mehr

    I had the same problem. I hate Lenovo. I will not buy another Lenovo in my lifetime. I paid for the OS, then when I call customer care, they ask me for money. Guys, look at Dell. I had the system for 7 years, and I had the OS that I purchased with my laptop. I am loving Dell. Lenovo, your process is wrong. 

  • Partitions for OSX and Win-XP, Photo Library and Scratchdisk

    Hello everybody,
    I have just received my 160GB hard disk for my MacBook, before installing it I have a question:
    I want to run OSX, and windows XPPro
    Within OSX I will be using Aperture
    Within the XPPro environment I will be using CS2
    Question 1.
    For the CS2 environment I want to partition a scratch disk of approx. 20GB. Will I be able to use this scratch disk from both OSX and XP, depending on which I am using? (I am also able to run CS2 from OSX.)
    Question 2.
    I want to access a single photo library from both OSX and XPPro; do I make an extra partition for this as well?
    I would be most pleased for any comments or alternative setups on partitioning.
    I will be using Bootcamp.
    Thanks in advance

    I would only partition for the scratch disk and for running XP. For iPhoto it is best if the pictures are on the same partition as the operating system, as it makes it easier to back up your data, as my FAQ explains:
    http://www.macmaps.com/backup.html
    Before partitioning, be sure to back up your data at least twice, as partitioning may cause the drive to fail if there is anything wrong with it at the low level. Apple's Boot Camp utility theoretically can partition without wiping the drive, and other methods of running XP on your MacBook may make their own virtual partition to run XP on, which you can copy over to a formatted partition with the installer's Disk Utility if you so desire, or you can keep on your main partition. This FAQ of mine explains what those methods of running XP on Intel Macs are:
    http://www.macmaps.com/macosxnative.html#INTEL
    You may find, though, that you don't have to run XP at all. There are many more Mac-compatible titles than you might think, which you can find in the above FAQ.

  • Can i set up my drive as encrypted with different partitions for different versions of osx?

    i have some questions about setting up an encrypted drive with different versions of osx installed on separate partitions, and how this choice effects time-machine and the emergency recovery disk.
    I have a new/refurbished MacBook Pro with an SSD.  It already has Mavericks installed.  I want to fully wipe the drive and reinstall everything myself because I'm odd like that.
    first wiping the current drive
    Does the recovery partition get wiped if I use Disk Utility to reformat the drive, even though the recovery partition does not show up as a partition when I look at it with Disk Utility? I would like to know I've wiped all partitions so that no little bugger gets left without me knowing. Disk Utility does not show the recovery partition, and this makes me concerned it might not wipe it.
    Does Mavericks automatically make a recovery partition during the installation process? Or do I need to make a new 1GB partition for the recovery disk?
    Can I have two different partitions on my drive with separate installations of OSX on them? (One for work, where I don't update the system OS till my current project is done, and the other for experimenting with new software.)
    Will Time Machine back up both of the partitions or just one?
    Can I accomplish all this from a bootable USB drive, or do I need to do this in target disk mode?
    Do I need to use a more capable utility than the stock Apple Disk Utility?
    When reformatting the drive, should I format it as encrypted, or let FileVault do this after I install Mavericks?
    How much does encryption slow down performance for things like photo/video/music production?

    Do a backup before you do anything.
    Does the recovery partition get wiped if I use Disk Utility to reformat the drive?
    It shouldn't.
    Does Mavericks automatically make a recovery partition during the installation process?
    Yes.
    Can I have two different partitions on my drive with separate installations of OSX on them?
    Yes.
    Will Time Machine back up both of the partitions or just one?
    It will, as long as one partition is not excluded in Time Machine/Options.
    Do I need to use a more capable utility than the stock Apple Disk Utility?
    No, just boot into the Recovery Volume (Command-R on a restart).
    Should I let FileVault encrypt after I install Mavericks?
    I would let FileVault do that.

  • Looking for a One to Many script to extend the system partition for Windows 7 machines

    Looking for a One to Many script to extend the system partition for Windows 7 machines

    Pre-written scripts can be found in the repository:
    http://gallery.technet.microsoft.com/scriptcenter
    If you can't find what you need, you can request a script (no idea if anyone ever bothers to fulfill these requests though, I know I don't):
    http://gallery.technet.microsoft.com/scriptcenter/site/requests
    Let us know if you have any specific questions.

  • How to add a partition for Windows XP

    Hi, I recently received my T410 with Windows 7 pre-installed. I would like it to be a dual-boot system for Windows 7 and XP.
    I had hoped to be able to create a new primary partition after shrinking the Win7 partition. However, it looks like my system already came with three primary partitions. They are
    Volume, Layout, Type, File System, Status
    Lenovo_Recovery (Q:), Simple, Basic, NTFS, Healthy (Primary Partition)
    SYSTEM_DRV, Simple, Basic, NTFS, Healthy (System, Active, Primary Partition)
    Windows7_OS (C:), Simple, Basic, NTFS, Healthy (Boot, Page File, Crash Dump, Primary Partition)
    1. Does SYSTEM_DRV need to be a primary partition?
    2. Since it looks like SYSTEM_DRV is just the system drivers, is it possible for me to either (a) change that partition to a logical drive partition or (b) move those files to a folder in the Windows7_OS partition? Then I am assuming that will free up a primary partition for XP and I can carry on with installing XP on that partition.
    3. If yes to question 2, what steps do I need to take to do either (a) or (b)?
    The steps I am currently following for making it a dual-boot system are here: http://www.sevenforums.com/tutorials/8057-dual-boot-installation-windows-7-xp.html and http://www.sevenforums.com/installation-setup/67236-problems-dual-boot-win-7-win-xp.html. They don't address the case where the computer already has three primary partitions, unfortunately; and I figure this is more of a Lenovo question rather than a question for that forum.
    Thanks for your help.

    Not to put you off what you're about to do, but depending on what version of 7 you have, Pro and Ultimate have an option to install a full copy of XP Pro SP3 in a virtual machine within Windows 7.  It will integrate Windows 7 files into it so you can access them, and it actually is a full version of XP.  You allocate system memory for it to run, and you can either have it set to shut down, which will free up the resources, or hibernate, which won't (although it loads much faster from hibernate).
    Before you do anything, I would burn restore discs in case something gets screwed.  Make sure you have restore points set.  

  • How to create extra partitions for windows 7 after installing boot camp

    Hello,
    I am pretty new to the iMac, so bear with me; I really need help.
    iMac
    OS X version: 10.9.5 (I think it's OS X Mavericks?)
    iMac 21.5-inch, Late 2013
    iMac model identifier: iMac14,1
    I installed Boot Camp with Windows 7 and it worked perfectly. Then I tried to create two partitions in MS-DOS (FAT) format using Disk Utility, because I really want to keep my files on a separate partition instead of having everything on the Boot Camp drive. After I restarted and chose the Windows OS, I could not get back into Windows 7.
    I held Option on the keyboard, but it did not show Windows 7, so I went back to the Apple OS and found my Boot Camp HD on the desktop with the files still in it. Boot Camp Assistant shows:
    "The startup disk must be formatted as a single Mac OS Extended (Journaled) volume or already partitioned by Boot Camp Assistant for installing Windows."
    I have read a lot of articles about this, but I don't seem to understand much of it.
    I understand a lot of people have created these kinds of topics,
    but please, if someone can help me solve this problem:
    I only want the Apple OS, Boot Camp for Windows 7, and 2 extra partitions for storage. My hard drive is 1 TB.
    Thank you

    Last login: Mon Feb 16 16:11:46 on console
    YLFs-iMac:~ ylf$ diskutil list
    /dev/disk0
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:      GUID_partition_scheme                        *1.0 TB     disk0
       1:                        EFI EFI                     209.7 MB   disk0s1
       2:                  Apple_HFS Macintosh HD            999.3 GB   disk0s2
       3:                 Apple_Boot Recovery HD             650.0 MB   disk0s4
    /dev/disk1
       #:                       TYPE NAME                    SIZE       IDENTIFIER
       0:     FDisk_partition_scheme                        *8.0 GB     disk1
       1:                 DOS_FAT_32 WININSTALL              8.0 GB     disk1s1
    YLFs-iMac:~ ylf$ diskutil cs list
    No CoreStorage logical volume groups found
    YLFs-iMac:~ ylf$ sudo gpt -vv -r show /dev/disk0
    Password:
    gpt show: /dev/disk0: mediasize=1000204886016; sectorsize=512; blocks=1953525168
    gpt show: /dev/disk0: PMBR at sector 0
    gpt show: /dev/disk0: Pri GPT at sector 1
    gpt show: /dev/disk0: Sec GPT at sector 1953525167
           start        size  index  contents
               0           1         PMBR
               1           1         Pri GPT header
               2          32         Pri GPT table
              34           6        
              40      409600      1  GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
          409640  1951845952      2  GPT part - 48465300-0000-11AA-AA11-00306543ECAC
      1952255592     1269536      3  GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC
      1953525128           7        
      1953525135          32         Sec GPT table
      1953525167           1         Sec GPT header
    YLFs-iMac:~ ylf$
    Also, I do not have Boot Camp installed right now because, like I said, creating the extra partition messed up my Boot Camp setup, which is why I removed it. At the moment I only have the Apple OS.
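    As a sanity check, the sector counts in the `gpt -r show` output above line up with the sizes `diskutil list` reports (512-byte sectors, decimal GB/MB). A quick shell check of the arithmetic:

    ```shell
    # Sector counts copied from the `gpt -r show /dev/disk0` output above;
    # the disk reports 512-byte sectors.
    hfs_bytes=$(( 1951845952 * 512 ))     # partition 2: Apple_HFS "Macintosh HD"
    recovery_bytes=$(( 1269536 * 512 ))   # partition 3: Apple_Boot "Recovery HD"
    echo "Macintosh HD: ${hfs_bytes} bytes"      # 999345127424 bytes, i.e. 999.3 GB
    echo "Recovery HD:  ${recovery_bytes} bytes" # 650002432 bytes, i.e. 650.0 MB
    ```

    The table also shows no Windows (Microsoft Basic Data) partition remaining on disk0, consistent with Boot Camp having been removed: the disk is just EFI + Macintosh HD + Recovery HD, which is the single-HFS+ layout Boot Camp Assistant expects to start from.
    
    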
