Using Tangosol Coherence in conjunction with Kodo JDO for distributed caching

JDO currently has a perception problem in terms of performance. Transparent
persistence is perceived to carry a significant performance overhead compared
to hand-coded JDBC. That was certainly true a while ago, when the first JDO
implementations were evaluated: they typically performed about half as well,
with higher resource requirements. No doubt JDO vendors have since closed that
gap by caching PreparedStatements, queries, and data, and through other
optimizations.
Aside from the ease of programming that transparent persistence brings, I
believe that using JDO in conjunction with distributed caching techniques in
a J2EE managed environment can transparently deliver scalability,
performance, and availability improvements that would otherwise be much more
difficult to realize through other persistence techniques.
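To make the "transparent" part concrete, here is a minimal JDO sketch; the
Magazine class, its fields, and the kodo.properties file are hypothetical, and
any distributed data cache would sit entirely below this API:
<pre>
import java.io.FileInputStream;
import java.util.Collection;
import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Query;

public class TransparentPersistenceExample {
    public static void main(String[] args) throws Exception {
        // Vendor, connection, and caching settings live in the properties
        // file, not in the application code.
        Properties props = new Properties();
        props.load(new FileInputStream("kodo.properties")); // hypothetical file
        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);

        PersistenceManager pm = pmf.getPersistenceManager();
        pm.currentTransaction().begin();
        pm.makePersistent(new Magazine("1234-5678", "JDO Monthly"));
        pm.currentTransaction().commit();

        // Reads go through the same API; whether the data comes from the
        // database or from a (possibly distributed) cache is invisible here.
        Query q = pm.newQuery(Magazine.class, "title == \"JDO Monthly\"");
        Collection results = (Collection) q.execute();
        System.out.println("found " + results.size() + " magazine(s)");
        pm.close();
    }
}

/** Hypothetical persistence-capable class (needs JDO metadata and enhancement). */
class Magazine {
    private String isbn;
    private String title;
    public Magazine(String isbn, String title) { this.isbn = isbn; this.title = title; }
    public String getIsbn() { return isbn; }
    public String getTitle() { return title; }
}
</pre>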
In particular, it looks like Tangosol is doing a lot of good work in the
area of distributed caching for J2EE. For example, executing parallelized
searches across a cluster is a fairly unique capability, and one that is
potentially very valuable to many applications. There appears to be a lot of
synergy between Kodo JDO and Tangosol Coherence. Using Coherence as an
implementation of Kodo JDO's distributed cache would be a natural fit for
enterprise applications that have J2EE clustering requirements for high
scalability, performance, and availability.
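For a sense of what that parallel query capability looks like at the Coherence
API level (independent of JDO), here is a minimal sketch; the cache name and
the getSymbol accessor are made up for illustration:
<pre>
import java.util.Set;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.filter.EqualsFilter;

public class ParallelQueryExample {
    public static void main(String[] args) {
        // In a partitioned cache, the filter is evaluated in parallel on the
        // cluster members that own the data, and only the matches come back.
        NamedCache trades = CacheFactory.getCache("trades"); // hypothetical cache name
        Set matches = trades.entrySet(new EqualsFilter("getSymbol", "ORCL"));
        System.out.println("matching entries: " + matches.size());
        CacheFactory.shutdown();
    }
}
</pre>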
I'm wondering whether SolarMetric has any ideas or plans for closer
integration (e.g., pluggability) of Tangosol Coherence into Kodo JDO. This is
just my personal opinion, but I think a partnership between your two
organizations to do this integration would be mutually advantageous, and it
would potentially be very attractive to your customers.
Ben

Marc,
Thanks for pointing that out. That is truly excellent!
Ben
"Marc Prud'hommeaux" <[email protected]> wrote in message
news:[email protected]...
Ben-
We do currently have a plug-in for backing our data cache with a
Tangosol cache.
See: http://docs.solarmetric.com/manual.html#datastore_cache_config
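Roughly, bootstrapping a PersistenceManagerFactory with the data cache enabled
looks like the sketch below; the exact property keys and plug-in values are
assumptions from memory and may not be accurate, so treat the manual section
linked above as authoritative:
<pre>
import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

public class KodoDataCacheBootstrap {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The usual JDBC connection settings (driver, URL, user, password) go here.

        // ASSUMPTION: illustrative key/value pairs only; see the data cache
        // section of the Kodo manual for the actual plug-in configuration.
        props.setProperty("kodo.DataCache", "tangosol");
        props.setProperty("kodo.RemoteCommitProvider", "tcp");

        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
        System.out.println("configured factory: " + pmf.getClass().getName());
    }
}
</pre>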
--
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com

Similar Messages

  • Using ProShow Gold in conjunction with PSE4

    I would be interested if anybody has any comments on using ProShow Gold in conjunction with PSE4 (under Windows XP).
    I am currently trying out the evaluation version of ProShow Gold 2.6.
    My reasons for trying out ProShow as an alternative to the built in PSE4 slideshow maker are:
    (1) More and better transitions available.
    (2) More control over audio - timing, fading and voice-overs for individual slides.
    (3) DVD burning available without going to Nero or other product.
    (4) Getting away from the problem of backing up slideshows under PSE4.
    ProShow Gold does have a folder pane for picking photos but if the photos required for a particular show are spread over a large number of folders (which is almost certainly the case for most PSE4 users) it is not very convenient.
    I therefore use Organizer, with all its tagging and searching and collection facilities as a "front end" for ProShow from which to drag selected photos. The only problem is that it is rather hard to find room on a single monitor for both programs simultaneously.
    ProShow also allows for an "external image editor" to be specified. But after nominating PSE4, I find that clicking on ProShow's "Edit" button only invokes Organizer, and does not open the image in Editor as I would have expected.
    EDIT - please disregard last paragraph - I did not have things set up properly.

    I use Proshow in preference to Elements. I find the quality of the end product is far better.
    I also use Elements Organiser to make a collection of the images for the show and then drag them onto Proshow. I've found it works best by dragging the images into Proshow's Light Box, which can be detached from the main window. You can shrink the Proshow main window down very small and the Organiser down too. As long as you have all the images selected in the Organiser and can just see one, you can drag them over to Proshow.
    You need to be careful of the order in which they appear in your Collection.
    I've found a 'feature' in Proshow that may just be unique to my setup. The preview, Full Screen or otherwise is not sharp. I have to go into the make Executable dialog and set up my screen dimensions. Once this is done the output is nice and clear.
    Colin

  • HT4236 I've been using the iCloud Photo Stream with my PC for a few months already. It has worked fine every time, but suddenly stopped working today

    I've been using the iCloud Photo Stream with my PC for a few months already. It has worked fine every time, but it suddenly stopped working today.

    Hey lalitgupta,
    Thanks for the question. The following article provides troubleshooting steps for Photo Stream:
    iCloud: Photo Stream troubleshooting
    http://support.apple.com/kb/TS3989
    Thanks,
    Matt M.

  • Has anyone successfully used the Intuit Quicken compatible with Lion upgrade for investment tracking?

    Has anyone successfully used the Intuit Quicken compatible with Lion upgrade for investment tracking?

    Although I gave up investment tracking for personal reasons (I had used Quicken Deluxe 2002 for many years for that purpose), my understanding is that the new Quicken 2007 for Lion/Mt. Lion is feature-wise exactly the same as Quicken 2007 for PPC.  It will run on Snow Leopard, too.
    For $15, why not download it and just try it and LET US KNOW!

  • How to use new iPad3 in conjunction with Lightroom on desktop iMac?

    Workflow:  Will this work?  Camera RAW to iPad3 to Lightroom???
    I would like to use my new iPad3 in conjunction with my already-established Lightroom workflow on my desktop Mac.  What I have in mind is to shoot in RAW (Sony SLT A-65 - 24.3 megapixels) and then, in the field, transfer my photos from the camera to the iPad3 (64GB version).  On the iPad I might do some rudimentary editing - deleting duds, etc. - but I will still do all my significant editing in Lightroom on the Mac.  Thus, I need to import the photos, from the iPad, in either RAW or DNG into LightRoom once I get back to my office.  My questions are:
    Is it possible to import RAW photos into the iPad3 and either keep them in RAW or convert them to DNG?
    Is it possible to export RAW or DNG photos from the iPad3 into Lightroom?
    What app(s) do I need on the iPad3 to make this happen?
    FYI - I am using Lightroom3, but will upgrade to Lightroom4 if that makes any difference in whether this will work or not.
    Thanks so much, in advance, for your suggestions and your advice!

    You can load your RAW images to any iPad using the Camera Connection Kit, which comes with two dongles: one for inserting an SD card from your camera to load the images, and a USB dongle to which you can connect your camera's USB cord to load the images. There are a few CF card readers that will work with the iPad if your camera uses those; a Google search will ferret them out for you. Additionally, you can also load images to an iPad, iPhone or iPod touch (as well as laptops and desktop computers) over WiFi as you capture them, using an EyeFi SD card.
    http://www.eye.fi/
    A good app for working with EyeFi on the iPad is Shuttersnitch (though, it does not coordinate with Lightroom)
    http://www.shuttersnitch.com/
    Unfortunately there isn't any RAW-processing image editing software for the iPad whose edits Lightroom can make use of. There is, however, an app called Photosmith that will allow you to rate, label, add keywords etc.; you can then upload that info to Lightroom on your laptop or desktop computer and have it synced to the images. You do have to transfer the image files themselves from the iPad to your workstation separately.
    http://www.photosmithapp.com/

  • How to use SOA Suite in conjunction with SOA Analysis and Design Tools

    Hi everybody,
    I am a novice in this field and I need some help regarding integrating analysis and design tools with SOA Suite.
    We used to analyze and design with Oracle Designer and use its powerful form generator to develop a system. It covered almost all of the software lifecycle and kept the traceability between analysis, design and implementation.
    I have studied the SOA concepts and read some papers about SOA Suite. I have also installed the SOA demo based on SOA Suite and I found it absolutely amazing, but my problem is that it seems Oracle does not have any tools for SOA analysis and design. Am I right? If so, how can we analyze and design a system based on SOA concepts and implement it using SOA Suite in such a way that keeps traceability? What tools are used for this purpose?
    It seems that IBM has some tools like Rational Software Architect and Rational Suite which enable people to design and analyze based on SOA concepts and then generate some pieces of code (like Oracle Designer in the old days), but is it possible to design in these tools and then generate code for SOA Suite? (For example, generating a BPEL file from a design model.)
    As I said before, I am a novice in this field and I would be very grateful if other users can share their experiences regarding this matter.
    Any help would be highly appreciated.
    Thanks in advance,
    Navid

    Learn About All Things SOA:: SOA India 2007:: IISc, Bangalore (Nov 21-23)
    Aligning IT systems to business needs and improving service levels within the constraints of tight budgets has for long been the topmost challenge for CIOs and IT decision makers. Service-oriented Architecture (SOA) provides a proven strategy to clearly address both of these objectives. Creating more agile information systems and making better use of existing infrastructure are two leading factors that are boosting SOA adoption across large, medium, and small Indian industries from the BFSI, Retail, Telecom, Manufacturing, Pharma, Energy, Government and Services verticals in India. If you are an IT decision maker belonging to any of these verticals, SOA India 2007 (IISc, Bangalore, Nov 21-23 2007) presents a unique opportunity to gather cutting-edge business and technical insights on SOA and other related areas such as BPM, BPEL, Enterprise 2.0, SaaS, MDM, Open Source, and more.
    At SOA India 2007, acclaimed SOA analysts, visionaries, and industry speakers from across the world will show you how to keep pace with change and elevate your IT infrastructure to meet competition and scale effectively. The organisers are giving away 100 FREE tickets worth INR 5000 each to the first 100 qualified delegates belonging to the CxO/IT Decision Maker/Senior IT Management profile, so hurry to grab this opportunity to learn about all things SOA. You can send your complete details, including your designation, e-mail ID, and postal address directly to Anirban Karmakar at [email protected] to enrol in this promotion that is open until 12 October 2007.
    SOA India 2007 will also feature two half-day workshops on SOA Governance (by Keith Harrison-Broninski) and SOA Architecture Deep Dive (by Jason Bloomberg). If you are an IT manager, software architect, project leader, network & infrastructure specialist, or a software developer, looking for the latest information, trends, best practices, products and solutions available for building and deploying successful SOA implementations, SOA India 2007’s technical track offers you immense opportunities.
    Speakers at SOA India include:
    •     Jason Bloomberg, Senior Analyst & Managing Partner, ZapThink LLC
    •     Keith Harrison-Broninski, Independent consultant, writer, researcher, HumanEdJ
    •     John Crupi, CTO, JackBe Corporation
    •     Sandy Kemsley, Independent BPM Analyst, column2.com
    •     Prasanna Krishna, SOA Lab Director, THBS
    •     Miko Matsumara, VP & Deputy CTO, SoftwareAG
    •     Atul Patel, Head MDM Business, SAP Asia Pacific & Japan
    •     Anil Sharma, Staff Engineer, BEA Systems
    •     Coach Wei, Chairman & CTO, Nexaweb
    •     Chaitanya Sharma, Director EDM, Fair Isaac Corporation
    A partial list of the sessions at SOA India 2007 include:
    •     EAI to SOA: Radical Change or Logical Evolution?
    •     BPEL: Strengths, Limitations & Future!
    •     MDM: Jumpstart Your SOA Journey
    •     Governance, Quality, and Management: The Three Pillars of SOA Implementations
    •     Building the Business Case for SOA
    •     Avoiding SOA Pitfalls
    •     SOA Governance and Human Interaction Management
    •     Business Intelligence, BPM, and SOA Handshake
    •     Enterprise 2.0: Social Impact of Web 2.0 Inside Organizations
    •     Web 2.0 and SOA – Friends or Foe?
    •     Achieving Decision Yield across the SOA-based Enterprise
    •     Governance from day one
    •     Demystifying Enterprise Mashups
    •     Perfecting the Approach to Enterprise SOA
    •     How to Build Cost Effective SOA. “Made in India” Really Works!
    For more information, log on to http://www.soaindia2007.com/.

  • Using SBO  client under Mac with Parallels Desktop for Mac

    A client uses a Mac for one of their users, and asked me if it is possible to use SBO 2005 SP01 with Parallels Desktop for Mac. Any experiences?

    Hi Hector,
    You may also want to check out VirtualPC for MAC. There's also a demo you can check out.
    http://www.microsoft.com/mac/products/virtualpc/virtualpc.aspx?pid=virtu
    Heather

  • Problem with Expiry Period for Multiple Caches in One Configuration File

    I need to have a cache system with multiple expiry periods, i.e. some records should exist for, let's say, 1 hour, some for 3 hours and others for 6 hours. To achieve this, I am trying to define multiple caches in the config file. Based on the data, I choose the cache (with the appropriate expiry period). That's where I am facing this problem. I am able to create the caches in the config file. They have different eviction policies, i.e. for Cache1 it is 1 hour and for Cache2 it is 3 hours. However, the data that is stored in Cache1 does not expire after 1 hour. It expires after the expiry period of the other cache, i.e. Cache2.
    Please correct me if I am not following the correct way of achieving this. I am attaching the config file here.
    Attachment: near-cache-config1.xml (*To use this attachment you will need to rename 142.bin to near-cache-config1.xml after the download is complete.)

    Hi Rajneesh,
    In your cache mapping section, you have two wildcard mappings ("*"). These provide an ambiguous mapping for all cache names.
    Rather than doing this, you should have a cache mapping for each cache scheme that you are using -- in your case the 1-hour and 3-hour schemes.
    I would suggest removing one (or both) of the "*" mappings and adding entries along the lines of:
    <pre>
    <cache-mapping>
    <cache-name>near-1hr-*</cache-name>
    <scheme-name>default-near</scheme-name>
    </cache-mapping>
    <cache-mapping>
    <cache-name>near-3hr-*</cache-name>
    <scheme-name>default-away</scheme-name>
    </cache-mapping>
    </pre>
    With this scheme, any cache that starts with "near-1hr-" (e.g. "near-1hr-Cache1") will have 1-hour expiry. And any cache that starts with "near-3hr-" will have 3-hour expiry. Or, to map your cache schemes on a per-cache basis, in your case you may replace "near-1hr-*" and "near-3hr-*" with Cache1 and Cache2 (respectively).
    Jon Purdy
    Tangosol, Inc.
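    For completeness, here is a sketch of the kind of scheme definition such a mapping could point at, with the expiry configured on the back tier's local scheme. The element names and nesting are illustrative of Coherence 3.x-style configuration and may differ by version, so treat this as a sketch rather than a drop-in config:
    <pre>
    <caching-schemes>
    <near-scheme>
    <scheme-name>default-near</scheme-name>
    <front-scheme>
    <local-scheme/>
    </front-scheme>
    <back-scheme>
    <distributed-scheme>
    <backing-map-scheme>
    <local-scheme>
    <!-- 1-hour expiry, specified in seconds -->
    <expiry-delay>3600</expiry-delay>
    </local-scheme>
    </backing-map-scheme>
    </distributed-scheme>
    </back-scheme>
    </near-scheme>
    </caching-schemes>
    </pre>
    The "default-away" (3-hour) variant would look the same except for <expiry-delay>10800</expiry-delay>.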

  • How to use MS Project in conjunction with a time-tracker tool

    In my new job, the company is using a time-tracking tool in a very strict way.
    Allow me to explain...
    Let's say the project has two phases: Ph1 and Ph2.
    Ph1 starts on day 1 and effort is 5 days.
    Ph2 starts after Ph1 and effort is 10 days.
    There are 3 resources available, R1, R2 and R3, and after a careful estimate of skills and avilability it has been decided the following:
    - Ph1 is assigned to R1 from day 1 to day 5, resulting in a duration for Ph1 of 5 days.
    - Ph2 is assigned to...
    ...R2 from day 6 to day 10
    ...R3 from day 8 to day 12
    resulting in a duration of 7 days.
    The above is mapped into the time-tracking tool to allow each resource to track time against Ph1 or Ph2 only in the planned timeframe. That is: R3 will be able to track hours against Ph2 only from day 8 and only till day 12. This is to avoid resources tracking more time than was originally allocated: if that is necessary, it must be explained why, a new plan must be calculated and more budget must be allocated. When the OK is given, the tasks in the time-tracking tool will be changed accordingly.
    If the above situation is clear, my question is: how would you model this in MS Project?
    If the WBS in MSP is simply
    1. Project X
    1.1 Ph1
    1.2. Ph2
    when allocating R2 and R3 to Ph2, how can I specify that R2 will work only from day 6 to 10 and R3 from day 8 to 12?
    At the moment the only clean solution I found is to go into the task usage view and adjust, for each day, the working hours of each resource assigned to the activity.
    A second less clean solution is to have a WBS like follows
    1. Project X
    1.1 Ph1
    1.2. Ph2
    1.2.1 Ph2.R2
    1.2.2 Ph2.R3
    that is split Ph2 into two sub-activities, one for each resource.
    Thanks for any help.

    Thanks a lot to both of you for your suggestions.
    Unfortunately the time-tracking tool cannot be changed: it is a new SAP based tool deployed worldwide.
    The problem is that each project involves different teams from different countries for a fixed "budget" (or a fixed number of man-days). Therefore there is a global PM in London who will define, in the time-tracking tool,
    a WBS like this
    1. Project X
    1.1 Analysis
    1.1.1 Analyse impact for team in Rome
    1.1.2 Analyse impact for team in Amsterdam
    1.1.3 Analyse impact for team in Bangkok
    1.2 Development
    1.2.1 Develop A
    1.2.2 Develop B
    1.2.3 Develop C
    1.3 Customer Acceptance Test
    1.3.1 Prepare test environment by team in Amsterdam
    1.3.1 Test by final user
    1.3.2 Support user testing by team in Rome
    then he would assign the development tasks to the resources that are supposed to work on them, and for only the estimated time. The reason is easy: he wants to make sure that a resource tracks time only against the task she is supposed to work on, and for only the estimated number of days, to avoid exceeding the budget. If extra effort is required for a resource on a task, before allocating more time on that task, a long battle will start.
    This we cannot change.
    The problem now is on our side, the teams in Rome/Amsterdam/Bangkok: we must keep our own plan in MSP. Our team in Rome is divided into 4 sub-teams, one for each module of the core product, each team with a team leader assigning tasks to resources with Asana. Until last week, for months, there was no PM in Rome leading the four teams, and of course there was no global plan for the tasks in Rome.
    My first attempt was to try a Gantt chart that would stop at phase level, without going down to task level, since the tasks are under the responsibility of the 4 team leaders. The reason being that we have something like 25 projects, some with dependencies on each other. I simply wanted to avoid having one huge MSP file with all projects down to task level: from experience in my previous project (an ERP upgrade for Oracle EBS in FAO), this approach did not work very well.
    But this brings up the problem I described: if I stop at "1.2 Development" in the WBS, then when I assign R1 and R3, to make sure their effort is planned according to what has been set up in the time-tracking tool, I need to edit work in the task usage view resource by resource, activity by activity.
    The other option is to insert under 1.2 development another level with all tasks from Asana, each task being assigned to one resource:
    1.2 Development
    1.2.1 Develop A
    1.2.1.1 Develop class x (by R1)
    1.2.1.2 develop class y (by R3)
    1.2.2 Develop C
    1.2.2.1 Develop new pricing interface (by R4)
    1.2.2.2 Modify all logos (by R7)
    A third option would be to have one detailed plan for each project down to the Asana task and then one higher-level plan summarising all projects, down to phases only.

  • Using external drive in conjunction with music on my internal hard drive

    OK, I, like many others, have way too much music haha. I have a 250 GB external drive with all my music on there. This is working great, but I'm tired of plugging the hard drive in, using FireWire, and then having to eject it when I'm ready to leave. What I want to do is keep a small amount of music on my computer to listen to. I'd like to still be able to plug the hard drive in and listen to the remainder of my music through iTunes, but I'm unsure of how to do this.
    My library will be the music that is on my internal drive... so is there a way to set up an alternate library for my external drive? Kinda like a playlist or new folder... any help is much appreciated. Hope this all makes sense... if not I'll try and re-explain.

    Also, sorry if this is a reposted question... I tried to search around the discussions and couldn't find the answer to my specific situation... actually didn't even see anyone trying to do what I'm talking about... again, thanks for any help

  • Unable to use dtrace USDT in conjunction with -xipo=2?

    We're currently using Studio 12/CC 5.9 and our C++ application includes an internal static library into which we embed several of our own dtrace userland probes (USDT).
    We're looking at updating to Studio 12.3/CC 5.12 but have run into problems with the dtrace probes in release/optimised builds only.
    The basic build process for the library is as follows:
    1) Use dtrace -h to generate a probe header file from a foo.d definition file
    2) Compile (C++) source file foo.cpp which references the macros from (1) to create foo.obj
    3) Use dtrace -G to generate a dtrace probe object foo_probe.obj and update foo.obj
    4) Use CC -xar to create a static library libfoo.a containing both objects from (3)
    This works fine in debug builds with both CC 5.9 and CC 5.12, however release builds with CC 5.12 fail when we try to link a binary against the static library with "Undefined symbol" errors referencing the dtrace probe functions in foo.obj.
    I noticed that with CC 5.12, step (4) above seems to 'undo' the changes made to the foo.obj file at step (3). I also noticed that step (4) is much slower with 5.12, suggesting that the compiler is doing some extra work. That led me down the path of removing -xipo=2 from my command line, at which point I get a library that I can link OK without missing symbols.
    Questions:
    1) Have there been changes to CC -xar which would explain this (eg previously -xipo=2 was ignored for archive creation)?
    2) If it does 'make sense' that I'm having problems here, is there any better solution than simply removing -xipo=2 from the library creation step? Obviously I'd like to get the performance benefits if possible, but I suppose that it may simply not be compatible with the way dtrace USDT providers work...?
    Thanks,
    Matt.

    Hi,
    The way that -xipo works is that at link time it recompiles all the object files doing inlining and optimisation between the files at that point. This will have the effect of undoing the dtrace -G processing, leaving just the raw dtrace function calls.
    You are using archive libraries, so one workaround is to set the -xipo_archive=... flag to either 'none', meaning don't process the archive libraries, or 'readonly', which will inline code from the archive libraries but won't inline code into them.
    Regards,
    Darryl.

  • I have been using firefox on my desktop with windows xp for years with no problems and suddenly refused to open. The error message reads "unable to locate component. plc4.dll was not found." What does that mean? How do I fix it?

    The error message also suggests that I uninstall and reinstall Firefox; I tried that but it does not fix the problem.

    Which security software (firewall, anti-virus) do you have?
    If you use ZoneAlarm Extreme Security then disable Virtualization and reinstall Firefox.
    Other security software may have similar features to prevent Firefox from updating files properly.
    See these pages for a similar issue with corrupted files:
    *http://kb.mozillazine.org/Browser_will_not_start_up#XULRunner_error_after_an_update
    *[[/questions/880050]]

  • Using JHS tables and hashing with salt algorithms for Weblogic security

    We are going to do our first enterprise ADF/JHeadstart application. For security part, we are going to do the following:
    1. We will use JHS tables as authentication for ADF security.
    2. We will use JAAS as authentication and Custom as authorization.
    2. We need to use JHeadStart security service screen in our application to manage users, roles and permission, instead of doing users/groups management within Weblogic.
    3. We will create new Weblogic SQL Authentication Provider.
    4. We will store salt with password in the database table.
    5. We will use Oracle MDS.
    There are some blogs online giving detailed steps on how to create a Weblogic SQL Authentication Provider and use JHS tables as authentication for ADF security. I am not sure about the implementation of hashing with salt algorithms, as ideally we'd like to use the JHS security service screen in the application to manage users, roles and permissions, not use Weblogic to do the users/groups management. We are going to try a JMX client to interact with the Weblogic API; it looks like a flexible approach. Does anybody have experience working with JMX, SQL Authentication Providers and hashing with salt algorithms? Just want to make sure we are on the right track.
    Thanks,
    Sarah

    To be clear, we are planning on using a JMX client at the Entity level using custom JHS entity classes.
    BradW working with Sarah
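    As an aside, the salted-hashing piece in isolation can be as simple as the following Java sketch (the SHA-256 algorithm, 16-byte salt, and Base64 storage encoding are assumptions; the Weblogic SQL Authentication Provider and JMX wiring are a separate concern):
    <pre>
    import java.security.MessageDigest;
    import java.security.SecureRandom;
    import java.util.Arrays;
    import java.util.Base64;

    public class SaltedHashExample {

        /** Generate a random 16-byte salt for a new user. */
        static byte[] newSalt() {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt);
            return salt;
        }

        /** Hash the salt followed by the password bytes with SHA-256. */
        static byte[] hash(String password, byte[] salt) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(salt);
            md.update(password.getBytes("UTF-8"));
            return md.digest();
        }

        public static void main(String[] args) throws Exception {
            byte[] salt = newSalt();
            byte[] stored = hash("s3cret", salt);

            // Both values would be persisted with the user row, e.g. Base64-encoded.
            System.out.println("salt = " + Base64.getEncoder().encodeToString(salt));
            System.out.println("hash = " + Base64.getEncoder().encodeToString(stored));

            // Verification: recompute with the stored salt and compare.
            System.out.println("password matches: "
                    + Arrays.equals(stored, hash("s3cret", salt)));
        }
    }
    </pre>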

  • HT1349 I was using a joint iTunes account with my partner for downloading apps etc. I have an iPhone 4 and my partner has an iPod, but I thought it would be best to have my own account; that was fine but

    I would like to have my apps under my own iTunes account, as most of them were bought with the joint account. I have an app that comes up under my own account, but when you want to purchase something from that app it won't allow it, as it was first bought on the joint account, and I don't want to lose all the information. I also have a few other apps with a lot of info stored on them which I don't want to lose. How do I rectify this?
    Thanks
    Finlay

    All Apps Downloaded are Tied to the Apple ID that First Purchased them... You cannot Merge Apple IDs...
    Apple ID FAQ
    http://support.apple.com/kb/HE37

  • Mirror RAID: can you use a single 4TB disk with 2 partitions for mirrored RAID?

    Hello,
    I have a new 4TB hard drive.
    After lots of reading, I was thinking that I could partition the drive into 2 (2TB each), then create a mirrored RAID.
    Essentially I was thinking that I would have photos on one of the 2TB partitions and they would mirror on the other, as double-copy storage.
    I can partition okay, but when I try to drag the two partitions onto the "mirror raid set" the following message displays:
    "Can't add Seagate expansion disk" to the raid set because another volume from this disk is already part of the RAID set"
    I'm now thinking I need to purchase another 4TB hard drive for this to work?
    Or if you "mirror raid set", is it actually partitioning the drive and creating a copy?
    Forgive me... I'm not really very skilled with Disk Utility, and basically just don't understand.
    Looking forward to some help and advice
    warm regards Tracy Gr

    It doesn't make any sense to mirror on the same drive. The idea behind mirroring is that should one disk in the raid set fail your data is safe on the other disk. Mirroring to partitions, if it's even possible, would only bring you disaster when the drive fails because you would lose the complete raid set and all of your data. If you insist on mirroring buy another drive.
