Partition in Informatica
I am not able to see the Partition tab in Informatica sessions, even though we have a full license of Siebel Analytics 7.8.
Do we need to purchase a separate license for partitioning, or is there an option to enable it that I am missing?
Thanks,
Ravish
You need the [OBIA forum|http://forums.oracle.com/forums/forum.jspa?forumID=410].
Similar Messages
-
Using INTERVAL partition with Informatica
Curious to know if anyone has attempted using Oracle's INTERVAL partitioning method in conjunction with Informatica. My attempts have all resulted in an ORA-14400 error. However, slightly modifying the source qualifier SQL and executing in SQLPlus dynamically creates the partitions as expected. When standard RANGE partitions are created (without the INTERVAL option), Informatica will load the target table as expected.
So you just want to add a number of hours to a date?
select LOCATION_DATE + LOCATION_TIMEZONE/24 NEW_TIME from tablex
Max
http://oracleitalia.wordpress.com
Edited by: Massimo Ruocchio on Feb 19, 2010 12:21 AM -
Split the incoming records into partitions
Gurus,
I have a table with around 17,000 records each day to be processed through Informatica. All these records are distinct in every attribute, and the status of the records is 'PENDING_CLEAR'.
My requirement is to read these records into 8 partitions (equal partitions, if possible) so that I can run 8 partitions in Informatica.
I have the options below available in Informatica:
1. Pass through
2. Key range and
3. Database partitioning.
Options 2 and 3 are not possible, since the database is not partitioned, and I will not be able to provide the key ranges, since the primary key on the table is an auto-increment number.
In pass through I would be reading the same 17,000 records across all the pipelines, and it is a challenge to handle that in Informatica.
Any suggestions for partitioning the records using SQL?
Thanks a lot for your time.
>
Yes. But the way Informatica reads and processes the records complicates things further. I am trying to sort out the Informatica side of the issue.
>
And that reinforces what I said above: you are likely 'misusing' Informatica if you have that sort of problem.
A middle-tier application like Informatica does not provide a 'one size fits all' solution without making (in some cases serious) performance trade-offs.
Such tools function best when they act as the 'middle man' among multiple databases. Need to do some basic data manipulation and source data from both Oracle and DB2? A tool can access both DBs with separate connections and help you merge or filter the data. The tool can also assist in performing some basic data transformations.
Need to sort data and apply complex business rules? That work should be done in the database.
The primary goal, when working with multiple database sources or targets, is to do AS MUCH work as possible in the DB; that is what the DB is designed for.
Get the data INTO the DB as quickly and efficiently as possible and then do the work there. Mid-tier tools can be very effective at that. They can source data from multiple systems, do basic data cleansing and filtering to reject/fix/report 'dirty' data and they can consolidate multiple data streams to feed ONE target stream.
Tools such as informatica use a proprietary language and syntax. DBs like Oracle and DB2 base their syntax on well-established ANSI standards. There is NO comparison.
Versions of Informatica that I worked with didn't even allow you to access the code very easily. The code for individual business rules was locked into the objects it was attached to (with some flexibility using maps and mapplets). A developer has to 'open' the object in order to get access to the actual code being used.
The PL/SQL code used in DBs is readily accessible and easy to review.
The first question I would ask about your issue is: can this work (processing each record) be done in the database? If the answer is yes, then that is most likely where the work should be done. The 'tool' should then be used to do as much 'preprocessing' of the data as possible and then get the cleansed data into the database (into staging tables) as quickly as possible.
If you insist on going the way you are headed you will need to add code to 'chunk' the data.
See my reply in this thread for a description of how to use the NTILE function to do that:
Re: 10g: parallel pipelined table func - distributing DISTINCT data sets
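The NTILE approach can be sketched as follows. This is a minimal illustration using Python's sqlite3 (SQLite 3.25+ for window functions); the staging table and column names are invented for the demo, not the poster's actual schema:

```python
import sqlite3

# In-memory demo: split 17,000 'PENDING_CLEAR' rows into 8 roughly
# equal buckets with NTILE, so each Informatica pipeline can read
# one bucket. Table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO staging (id, status) VALUES (?, 'PENDING_CLEAR')",
    [(i,) for i in range(1, 17001)],
)

# NTILE(8) assigns each row a bucket number 1..8; bucket sizes differ
# by at most one row.
rows = conn.execute("""
    SELECT bucket, COUNT(*) AS cnt
    FROM (
        SELECT id, NTILE(8) OVER (ORDER BY id) AS bucket
        FROM staging
        WHERE status = 'PENDING_CLEAR'
    )
    GROUP BY bucket
""").fetchall()
for bucket, cnt in rows:
    print(bucket, cnt)
```

Here each of the 8 buckets gets exactly 17,000 / 8 = 2,125 rows; each pipeline could then filter on its own bucket number in its source qualifier override.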
And for more discussion of why you should NOT be going this way, here is a thread from last year with a question very similar to yours, also using Informatica:
Looking for suggestion on splitting the output based on rowid/any other
>
I have the query below. I need to split the total rows pulled by the query into two halves while maintaining data accuracy.
E.g., if the query returns 500 rows, I need one query which returns 250 and another with the other 250.
Reason:
I have an Informatica ETL job which pulls data with the above query and updates it into the target. The run time is 2 hours.
In order to reduce the run time, I am using 2 pass-through partitions which will run the query in two partitions, so the run time will be reduced to exactly 1 hour.
>
Sound familiar? Read what the responders (including me) had to say. -
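For the two-way split quoted above, one common approach (a sketch only, with invented table and column names, and subject to the same caveats the responders raised) is row-number parity:

```python
import sqlite3

# Hypothetical demo: split a 500-row result set into two equal halves
# using row-number parity, so two pass-through partitions can each
# filter on one half.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE src (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO src (id) VALUES (?)", [(i,) for i in range(500)])

halves = conn.execute("""
    SELECT rn % 2 AS half, COUNT(*) AS cnt
    FROM (SELECT ROW_NUMBER() OVER (ORDER BY id) AS rn FROM src)
    GROUP BY half
""").fetchall()
print(halves)  # two halves of 250 rows each
```

Each partition's source qualifier would then keep only one parity value; note the split is even only in row count, not necessarily in per-row processing cost.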
How to implement QUALIFY ROW_NUMBER() OVER (PARTITION BY col ORDER BY col) and CHAR2HEXINT in Informatica in a way that is supported by pushdown optimization?
Apart from SQL overriding or using a stored procedure, is there any other solution? Can the Rank transformation help here? But I guess the Rank transformation cannot be pushed down.
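For reference, a QUALIFY clause can be rewritten as an ordinary inline view plus a filter, which databases without QUALIFY accept. A minimal sketch using Python's sqlite3, with invented table and column names (keeping the latest row per account):

```python
import sqlite3

# QUALIFY ROW_NUMBER() OVER (PARTITION BY col ORDER BY col2) = 1
# is equivalent to computing the row number in a subquery and
# filtering on it in the outer query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE txns (account TEXT, amount REAL, ts INTEGER)")
conn.executemany("INSERT INTO txns VALUES (?, ?, ?)", [
    ("A", 10.0, 1), ("A", 20.0, 2), ("B", 5.0, 1),
])

latest = conn.execute("""
    SELECT account, amount FROM (
        SELECT account, amount,
               ROW_NUMBER() OVER (PARTITION BY account ORDER BY ts DESC) AS rn
        FROM txns
    )
    WHERE rn = 1
    ORDER BY account
""").fetchall()
print(latest)
```

Whether this exact rewrite is pushdown-friendly in a given PowerCenter version is an assumption to verify against the target database.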
Help please!
Hi Saichand,
The links were helpful. But I am not getting how it is working in test and not in live.
I found one difference while deploying. The column names of the object in both Test and Production had spaces, e.g. Full Name.
When this column Full Name is pulled into the repository in test, it automatically puts double quotes around the column names in the physical SQL when it hits the database.
But in production, when I pulled the column, the report gave an Invalid Identifier error, since OBIEE generated the column name as Full Name without double quotes.
Then I changed the columns in the Physical Layer repository to have double quotes for all columns. After that the report worked fine.
Could this have caused any issue with row partitioning?
Is there any setting to have column name in Double Quotes ?
Thanks,
Johnny -
Error from the session log between Informatica and SAP BI
Hi friends,
I am working on extraction from BI using Informatica 8.6.1.
Now, when I start the process chain from BI, I get an error in Informatica's session log. Please help me figure out what's going on during my execution.
Severity Timestamp Node Thread Message Code Message
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6228 Writing session output to log file [D:\Informatica\PowerCenter8.6.1\server\infa_shared\SessLogs\s_taorh.log].
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6014 Initializing session [s_taorh] at [Fri Dec 17 11:01:31 2010].
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6683 Repository Name: [RepService_dcinfa01]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6684 Server Name: [IntService_dcinfa01]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6686 Folder: [xzTraining]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6685 Workflow: [wf_taorh] Run Instance Name: [] Run Id: [43]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6101 Mapping name: m_taorh.
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6964 Date format for the Session is [MM/DD/YYYY HH24:MI:SS.US]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR TM_6703 Session [s_taorh] is run by 32-bit Integration Service [node01_dcinfa01], version [8.6.1], build [1218].
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24058 Running Partition Group [1].
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24000 Parallel Pipeline Engine initializing.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24001 Parallel Pipeline Engine running.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24003 Initializing session run.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING CMN_1569 Server Mode: [UNICODE]
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING CMN_1570 Server Code page: [MS Windows Simplified Chinese, superset of GB 2312-80, EUC encoding]
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6151 The session sort order is [Binary].
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6156 Using low precision processing.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6180 Deadlock retry logic will not be implemented.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING SDKS_38029 Loaded plug-in 300320: [PowerExchange for SAP BW - OHS reader plugin 8.6.1 build 183].
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING SDKS_38024 Plug-in 300320 initialization complete.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING PCCL_97003 [WARNING] Real-time session is not enabled for source [AMGDSQ_IS_TAORH]. Real-time Flush Latency value must be 1 or higher for a session to run in real time.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING SDKS_38016 Reader SDK plug-in intialization complete.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6307 DTM error log disabled.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TE_7022 TShmWriter: Initialized
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6007 DTM initialized successfully for session [s_taorh]
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR PETL_24033 All DTM Connection Info: [<NONE>].
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24004 PETL_24004 Starting pre-session tasks. : (Fri Dec 17 11:01:31 2010)
INFO 2010-12-17 11:01:31 node01_dcinfa01 MANAGER PETL_24027 PETL_24027 Pre-session task completed successfully. : (Fri Dec 17 11:01:31 2010)
INFO 2010-12-17 11:01:31 node01_dcinfa01 DIRECTOR PETL_24006 Starting data movement.
INFO 2010-12-17 11:01:31 node01_dcinfa01 MAPPING TM_6660 Total Buffer Pool size is 1219648 bytes and Block size is 65536 bytes.
INFO 2010-12-17 11:01:31 node01_dcinfa01 READER_1_1_1 OHS_99013 [INFO] Partition 0: Connecting to SAP system with DESTINATION = sapbw, USER = taorh, CLIENT = 800, LANGUAGE = en
INFO 2010-12-17 11:01:32 node01_dcinfa01 READER_1_1_1 OHS_99016 [INFO] Partition 0: BW extraction for Request ID [163] has started.
Edited by: bi_tao on Dec 18, 2010 11:46 AM
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8167 Start loading table [VENDOR] at: Fri Dec 17 11:01:32 2010
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8168 End loading table [VENDOR] at: Fri Dec 17 11:01:32 2010
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8141
Commit on end-of-data Fri Dec 17 11:01:32 2010
===================================================
WRT_8036 Target: VENDOR (Instance Name: [VENDOR])
WRT_8044 No data loaded for this target
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8143
Commit at end of Load Order Group Fri Dec 17 11:01:32 2010
===================================================
WRT_8036 Target: VENDOR (Instance Name: [VENDOR])
WRT_8044 No data loaded for this target
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8035 Load complete time: Fri Dec 17 11:01:32 2010
LOAD SUMMARY
============
WRT_8036 Target: VENDOR (Instance Name: [VENDOR])
WRT_8044 No data loaded for this target
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8043 ****END LOAD SESSION****
INFO 2010-12-17 11:01:33 node01_dcinfa01 WRITER_1_*_1 WRT_8006 Writer run completed.
INFO 2010-12-17 11:01:33 node01_dcinfa01 MANAGER PETL_24031
RUN INFO FOR TGT LOAD ORDER GROUP [1], CONCURRENT SET [1] *****
Thread [READER_1_1_1] created for [the read stage] of partition point [AMGDSQ_IS_TAORH] has completed. The total run time was insufficient for any meaningful statistics.
Thread [TRANSF_1_1_1] created for [the transformation stage] of partition point [AMGDSQ_IS_TAORH] has completed. The total run time was insufficient for any meaningful statistics.
Thread [WRITER_1_*_1] created for [the write stage] of partition point [VENDOR] has completed. The total run time was insufficient for any meaningful statistics.
INFO 2010-12-17 11:01:33 node01_dcinfa01 MANAGER PETL_24005 PETL_24005 Starting post-session tasks. : (Fri Dec 17 11:01:33 2010)
INFO 2010-12-17 11:01:33 node01_dcinfa01 MANAGER PETL_24029 PETL_24029 Post-session task completed successfully. : (Fri Dec 17 11:01:33 2010)
INFO 2010-12-17 11:01:33 node01_dcinfa01 MAPPING SDKS_38025 Plug-in 300320 deinitialized and unloaded with status [-1].
INFO 2010-12-17 11:01:33 node01_dcinfa01 MAPPING SDKS_38018 Reader SDK plug-ins deinitialized with status [-1].
INFO 2010-12-17 11:01:33 node01_dcinfa01 MAPPING TM_6018 The session completed with [0] row transformation errors.
INFO 2010-12-17 11:01:33 node01_dcinfa01 MANAGER PETL_24002 Parallel Pipeline Engine finished.
INFO 2010-12-17 11:01:33 node01_dcinfa01 DIRECTOR PETL_24013 Session run completed with failure.
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR TM_6022
SESSION LOAD SUMMARY
================================================
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR TM_6252 Source Load Summary.
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR CMN_1537 Table: [AMGDSQ_IS_TAORH] (Instance Name: [AMGDSQ_IS_TAORH]) with group id[1] with view name [Group1]
Rows Output [0], Rows Affected [0], Rows Applied [0], Rows Rejected[0]
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR TM_6253 Target Load Summary.
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR CMN_1740 Table: [VENDOR] (Instance Name: [VENDOR])
Output Rows [0], Affected Rows [0], Applied Rows [0], Rejected Rows [0]
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR TM_6023
===================================================
INFO 2010-12-17 11:01:34 node01_dcinfa01 DIRECTOR TM_6020 Session [s_taorh] completed at [Fri Dec 17 11:01:33 2010]. -
This is not a PC/PWX problem, but a mainframe security problem. TIDSS01.POS.IBD.DR0100.DAT is not a GDG file name. It might be the base for a GDG. A GDG file name would look something like either TIDSS01.POS.IBD.DR0100.DAT(+1) or TIDSS01.POS.IBD.DR0100.DAT.G001V00. So I suspect that you have the wrong file name. Please talk with your mainframe team.
Hi Dan, I am very new to using PowerExchange; please help me through this issue. Below are the details:
My data map name: postest.test1_POS
Copy book name which is used to create the PowerExchange data map: TIDSS01.ACTRLP.TEST(POSCPY)
Mainframe GDG name: TIDSS01.POS.IBD.DR0100.DAT
Below are the session properties I have set in the Informatica workflow:
Schema Name Override : postest
Map Name Override : test1_POS
PWX Partition Strategy : Overrides driven
Space : CYLINDER
File Name Override :TIDSS01.POS.IBD.DR0100.DAT(+1)
I am getting below error: PWXPC_12190
Message: [ERROR] Transformation [test1_POS]: A permanent error has been encountered in PowerExchange: [
[Informatica][SCLI PWX Driver] PWX-00267 DBAPI Error. DB_INSERT failed for file postest.test1_POS.
[Informatica][SCLI PWX Driver] PWX-01279 DBNTC INSERT failed for file postest.test1_POS. Rcs 1274/2019/268.
[Informatica][SCLI PWX Driver] PWX-01274 DBNTC INSERT Failed for file postest.test1_POS, rcs 260/2019/268.
[Informatica][SCLI PWX Driver] PWX-02019 SQL insert failure. SQLCODE = 268.
[Informatica][SCLI PWX Driver] PWX-00268 DBAPI Error. DB_OPEN failed for file TIDSS01.POS.IBD.DR0100.DAT.
[Informatica][SCLI PWX Driver] PWX-00220 DYNALLOC failed for file TIDSS01.POS.IBD.DR0100.DAT RCs = 9700/0.
[Informatica][SCLI PWX Driver] PWX-00221 DATA SET: TIDSS01.POS.IBD.DR0100.DAT WITH RETURN CODE 08 REASON CODE 00
[Informatica][SCLI PWX Driver] PWX-00221 RACF FUNCTION: RACDEF FOR
[Informatica][SCLI PWX Driver] PWX-00221 IGD308I DATA SET ALLOCATION REQUEST FAILED -
[Informatica][SCLI PWX Driver] PWX-00221 IKJ56893I DATA SET TIDSS01.POS.IBD.DR0100.DAT NOT ALLOCATED+
[Informatica][SCLI PWX Driver] PWX-07404 Permanent error set by Open call on file "TIDSS01.POS.IBD.DR0100.DAT" because dynamic allocation failed. rc=4
[Informatica][SCLI PWX Driver] PWX-07515 Insert call for table postest.test1_POS met a permanent error. Return codes 267 2019 268.
] -
Informatica Web Services - Multiple Occurring Elements - PK and K
I'm pretty sure that this is not the first error message in the session log; please check if there's any other message which precedes the message you've listed. If there are no accompanying messages, then please look in the Integration Service log to see whether you find any hints there. If that doesn't help either, I fear you will have to open a service request with Informatica Global Customer Support. Regards, Nico
I have a web service that does address validation and provides a suggestion list of addresses. I understand there will be a PK - FK relationship in the SOAP response. I am currently getting the following error: SDKS_38502 Plug-in #302600's target [o_Output::X_n4_Envelope: Partition 1] failed in method [execute]. Is this a writer issue? If so, what is the solution to fix it? Thanks,
Eric
-
How to Segregate Header, Footer and Trailer Records in Informatica Cloud?
Hi All, This is my source file structure, which is a flat file.
Source File:
"RecordType","Creation Date","Interface Name"
"H","06-08-2015","SFC02"
"RecordType","Account Number","Payoff Amount","Good Through Date"
"D","123456787","2356.14","06-08-2015"
"D","12347","2356.14","06-08-2015"
"D","123487","235.14","06-08-2015"
"RecordType","Creation Date","TotalRecordCount"
"T","06-08-2015","5"
The source file has to be loaded into three targets for Header, Detail and Trailer records separately.
Target Files:
File 1 : Header.txt
"RecordType","Creation Date","Interface Name"
"H","06-08-2015","SFC02"
File 2 : Detail.txt
"RecordType","Account Number","Payoff Amount","Good Through Date"
"D","123456787","2356.14","06-08-2015"
"D","12347","2356.14","06-08-2015"
"D","123487","235.14","06-08-2015"
File 3 : Trailer.txt
"RecordType","Creation Date","TotalRecordCount"
"T","06-08-2015","5"
I tried the solution below:
1. Source ---> [Expression] ---- filter 1 --- Detail.txt
                               ---- filter 2 --- Trailer.txt
In the source, I read the records starting from the 3rd row. This is because, if I read from the first row, the detail part contains more fields than the header part. The header part contains only three fields, so it would take only the first three fields of the records in the Detail section as well. That's why I am skipping the first two records (header fields and header record); refer to the example.
In filter 1 the condition is Record_Type = 'D'. In filter 2 the condition is Record_Type = 'T'. So filter 1 will load Detail.txt and filter 2 will load Trailer.txt.
In the task's pre-session command, I call a Windows .bat script to fetch the first two lines and load them into Header.txt.
This solution is working fine. My query is: can we use two pipeline flows in the same mapping in Informatica Cloud?
Pipeline Flow 1: Source ---> [Expression] ---- filter 1 --- Detail.txt
                                            ---- filter 2 --- Trailer.txt
Pipeline Flow 2: Source ---> [Expression] ---- filter 3 (Record_Type='H') --- Header.txt
The source file is the same in the two flows. In the first flow, I read from the third row, skipping the header section as I mentioned earlier. In the second flow, I read the entire content, take only the header record and load it into the target.
If I add flow 2 to the existing flow 1, I get the below error:
TE_7020 Internal error. The Source Qualifier [Header_Source] contains an unbound field [Creation_Date]. Contact Informatica Global Customer Support.
1. Does Informatica Cloud support two parallel flows in the same mapping?
2. Any other best solution for my requirement?
Since I am new to Informatica Cloud, can anyone suggest any other solution if you have one? It will be more helpful if you guys suggest a good solution. Thanks
Sindhu Ravindran
We are using a Web Services Consumer transformation in our mapping to connect to the RightFax server using a WSDL URL via a Business Service. In the mapping, we send the input parameters through a flat file, connect to the RightFax server via the WS Consumer transformation, then fetch the data and write it to a flat file.
07/28/2015 10:10:49 **** Importing Connection: Conn_000A7T0B0000000000EM ...
07/28/2015 10:10:49 **** Importing Source Definition: SRC_RightFax_txt ...
07/28/2015 10:10:49 **** Importing Target Definition: GetFax_txt ...
07/28/2015 10:10:49 **** Importing Target Definition: FaultGroup_txt ...
07/28/2015 10:10:49 **** Importing SessionConfig: default_session_config ...
<Warning> : The Error Log DB Connection value should have Relational: as the prefix.
<Warning> : Invalid value for attribute Error Log DB Connection. Will use the default value
Validating Source Definition SRC_RightFax_txt...
Validating Target Definition FaultGroup_txt...
Validating Target Definition GetFax_txt...
07/28/2015 10:10:49 **** Importing Mapping: Mapping0 ...
<Warning> : transformation: RIghtfax - Invalid value for attribute Output is repeatable. Will use the default value Never [transformation< RIghtfax > ]
Validating transformations of mapping Mapping0...
Validating mapping variable(s).
07/28/2015 10:10:50 **** Importing Workflow: wf_mtt_000A7T0Z00000000001M ...
<Warning> : Invalid value Optimize throughout for attribute Concurrent read partitioning. Will use the default value Optimize throughput [Session< s_mtt_000A7T0Z00000000001M > --> File Reader< File Reader > ]
<Warning> : The value entered is not a valid integer.
<Warning> : Invalid value NO for attribute Fail task after wait time. Will use the default value
Successfully extracted session instance [s_mtt_000A7T0Z00000000001M]. Starting repository sequence id is [1048287470]
Kindly provide us a solution. Attached are the logs for reference.
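Going back to the header/detail/trailer question above: outside of a single Cloud mapping, the split can also be done in a small pre-processing script. This is only a sketch; the in-memory file and the routing logic (keying on the first field) are assumptions for illustration:

```python
import io

# Route each record of the flat file to a Header/Detail/Trailer target
# based on the RecordType value in the first field. Column-header
# lines ("RecordType",...) are held and written to the target of the
# record type that follows them.
source = io.StringIO(
    '"RecordType","Creation Date","Interface Name"\n'
    '"H","06-08-2015","SFC02"\n'
    '"RecordType","Account Number","Payoff Amount","Good Through Date"\n'
    '"D","123456787","2356.14","06-08-2015"\n'
    '"D","12347","2356.14","06-08-2015"\n'
    '"D","123487","235.14","06-08-2015"\n'
    '"RecordType","Creation Date","TotalRecordCount"\n'
    '"T","06-08-2015","5"\n'
)

targets = {"H": [], "D": [], "T": []}
pending_header = None
for line in source:
    first = line.split(",")[0].strip('"\n')
    if first == "RecordType":       # column-header line: hold it until
        pending_header = line       # we see which record type follows
        continue
    if pending_header is not None:
        targets[first].append(pending_header)
        pending_header = None
    targets[first].append(line)

# targets["H"], targets["D"], targets["T"] would then be written to
# Header.txt, Detail.txt and Trailer.txt respectively.
print(len(targets["H"]), len(targets["D"]), len(targets["T"]))
```

This sidesteps the unbound-field error entirely by doing the routing before the mapping runs, at the cost of an extra pre-session step.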
-
Informatica Powercenter Repository connection issue in BIC2g-BI-LNX-2009
I have the BIC2g-BI-LNX-2009 image installed in a partition in my laptop.
I launch it using my VMWare player. My host PC OS is Windows 2008 Server Standard R2.
Everything is working fine so far except the Informatica client.
I recently installed the Informatica Power Center 8.6.0 client in the host OS,
as per the instruction given in [Page 52] of BIC2g-BI-LNX_June2009_Client_Install_Guide.pdf
Then I tried to add a repository as in Page 69 of that document.
After Step 4 and Step 5, I received the following error message.
Step 4: Add a domain
Step 5: Configure Domain
Unable to get repositories for domain Domain_Oracle2go.
Error: [PCSF_46008] Cannot connect to domain [Domain_Oracle2go]
to look up service [coreservices/DomainConfigurationService].
Has anyone else encountered the issue? I'm completely new to Informatica.
Any fix for this?
Thanks
z
You have to give the repository name that you created while installing Informatica, instead of Oracle_BI_BW_Base.
Let us know about the result.
Thanks, -
Format and partition external drive if you want dual use Mac / PC
I had purchased from the Apple Store in France a portable hard drive, an Iomega eGO USB 2.0/FireWire of 250 GB capacity (P/N 31713900; Model: RPHD-C; S/N: FEAJ02011V).
Originally formatted HFS+, it would mount on any of my Mac desktops over FireWire, easily on the iMac using USB, and with great difficulty on the iBook with the two USB plugs (together). It did not mount on any Windows PC I had used for tests (so reformatting it FAT32 or NTFS was *not* an option).
I had reformatted it FAT32 using the iMac under Mac OS 10.5 for use on multiple computers including Windows PCs. The drive would now:
Mount on the iMac and iBook using FireWire
Mount on the iMac using USB, but it will NOT:
1 - Mount on the iBook using USB, nor
2 - Mount on any Windows PC using USB
The solution was found at the office with our IT helpdesk.
Whether I formatted it FAT32 or NTFS (using Paragon NTFS for Mac OS X 10.5) on my iMac under OS 10.5, and even when I did the same on another external drive than the Iomega, the PC would not recognise it, while it would always mount on a Mac; it was even impossible to reformat it on the PC. The solution is (at least in the Windows world): you need to (1) format the drive AND (2) partition the drive, even if this involves creating a single partition. Using the Mac's Disk Utility, I had only formatted the drive and not partitioned it into a single partition, and Disk Utility did not request that from me. The drive as prepared was perfectly usable on any Mac anyway.
The cure was to go back to the iMac which had formatted it, mount it (it mounts), then (1) reformat and (2) partition, using a single partition.
Then, the drive would instantly be recognised on the PC as a F drive, whether under FAT 32 or under NTFS.
The blame is on me and on the Apple disk utility, which did not help me (I trust it would have been worse in the Windows world, but this is a bad mark on Disk Utility).
My suggestion to Apple would be that Disk Utility should tell us, once we have formatted a drive (HFS+, FAT32, or NTFS using Paragon), that we are not done yet and still must create the partition(s), even if we only need one partition.
HTH
Hi Michel-Ange
You are talking to other users like yourself here, not Apple. If you wish to make a suggestion to Apple, I suggest you do it at this site - http://www.apple.com/feedback/macosx.html
Allan -
Data Recovery from Partitioned and formatted Bit Locker Encrypted Drive
Recently, because of some issues installing Windows 7 from a Windows 8 installed OS, it said the disk is dynamic and Windows cannot be installed on it. So at last, after struggling hard with no other solution, I partitioned and formatted my whole drive, so all data was gone, including the drive which was encrypted by BitLocker.
For recovery I used many programs, such as Ontrack EasyRecovery, GetDataBack, and Recover My Files Professional Edition, but I still could not recover my data from that drive. Then I found a suggestion to use CMD to decrypt my data first:
http://technet.microsoft.com/en-us/library/ee523219(WS.10).aspx
It showed that it successfully decrypted my data; at that moment my drives were in RAW format, excluding the one on which Windows is installed. Then in CMD I ran chkdsk, which also showed no problem found. But the problem is that I still could not recover my data. Then I formatted drive D and again tried to recover the data using the above software after decryption, still with no result.
Now I need assistance with how I can recover my encrypted drive, as it was partitioned and also formatted, but decrypted as well, since I have its recovery key too. Thanks
Hi,
I am afraid that we cannot get the data back if the drive has been formatted, even if you use the BitLocker Repair Tool.
You'd better contact your local data recovery center to try to get the data back.
Tracy Cai
TechNet Community Support -
I have this doubt. I've just bought an external drive, specifically a Seagate GoFlex Desk 3 TB.
I want to know if it is advisable to make a partition exclusively for Time Machine and leave another one where I can put music, photos, videos, etc. that I may need to use or copy to another computer.
Maybe half and half: 1.5 TB for Time Machine and 1.5 TB for data.
I have an internal hard drive of 500 GB (499.25 GB) in my MacBook Pro.
Any recommendation?
As I said, yes. Be sure your Time Machine partition has at least 1 TB for backups.
1. Open Disk Utility in your Utilities folder.
2. After DU loads select your hard drive (this is the entry with the mfgr.'s ID and size) from the left side list. Click on the Partition tab in the DU main window.
3. Under the Volume Scheme heading set the number of partitions from the drop down menu to two (2). Click on the Options button, set the partition scheme to GUID then click on the OK button. Set the format type to Mac OS Extended (Journaled.) Click on the Partition button and wait until the process has completed. -
Using a partition to share files among multiple computers?
I have a small business with just a few employees, and we currently share all of our business files, which are hosted on my MacBook Pro. These files include an Aperture library, an iPhoto library, and a bunch of just plain old files, including a fairly large FileMaker database. I like having these files on my computer so that I can access them quickly (without lag), and this is particularly important for the database and the photo libraries. And since I am often working 'after hours', it's easy for me to be on the move (even if it's just a move to my couch!) and still have quick and easy access to the work files.
However, we're having some major problems with file permissions. Sometimes (and seemingly randomly) employees can't open files because they 'don't have permission', and often I need to reset the permissions on files my employees create in order to open them myself. iPhoto needs to "repair" the photo library just about any time a different user opens it up. It looks like we just lost a couple hours' worth of work yesterday tagging photos in iPhoto, since the tags mysteriously disappeared when I opened iPhoto this morning.
So we need a solution that will get us around this permissions nightmare. I read one suggestion about putting the iPhoto/Aperture libraries on an external HD and sharing them that way; however, toting a hard drive around might get awkward, especially if I am working after hours, and I don't know that it solves our issue with the rest of our work files. At this point I would be willing to invest in a Mac mini server if that would solve some problems, but that would probably leave me with slower access to the files than I currently enjoy.
So the thought came to me: why don't I partition my hard drive and use half of it for all the work files? Could I set it up so permissions are ignored on that partition, as is recommended for the external HD solution? Are there other solutions, or pitfalls with this solution that I'm not considering?
ArisaemaDracontium wrote:
I'd like to use the method described in the following link. Can I do this with a partition instead of an external drive?:
Yes.
Can I partition my drive without wiping it entirely and then having to restore everything?
Probably. See #3 in Using Disk Utility.
Also, the "Alternative Method" mentions using a disk image. My experience with disk images is limited to those temporary "disks" used when installing new software, which are erased when put in the trash or when the computer restarts. Is there such a thing as a permanent disk image?
Yes. You can create and keep them anywhere. They're kind of like a disk-within-a-disk. They have their own format, directories, etc. For example, some folks use encrypted disk images for their sensitive files.
There are a few considerations:
Only one user can access it at a time, so if you use Fast User Switching and one user has it open, others won't be able to use it.
You'll need to put it in each user's Login Items so it will be mounted at login, per the article.
The contents will not be backed-up while it's mounted.
(But, in many cases the iPhoto Library won't be backed-up by Time Machine while the iPhoto app is open, no matter where it is.) -
Hello, I would like to know how to delete the yellow partition named "Other" (in Italian, "Altro") that appears when I connect my devices to the laptop. On the iPad this partition is 4 GB, which for a 16 GB device seems a bit much, also because it far exceeds anything else. So far I have not been able to find useful information, and it is a very common problem. Thank you.
Thank you very much, but it's not so simple. I read somewhere in this same forum that:
I quote: "That large of "other" is corrupt data. Try restoring your phone from backup first, followed by syncing your content back to the phone. If restoring from backup does not fix things, you will have to restore as a new device to get rid of it. There is no way to directly delete it, other than restoring your phone."
First of all it's terrible (but we know that iTunes is very far from working properly; it's a store), and how many times a day should we do a restore? And if that does not work, do we have to buy another device? I cannot believe that there aren't other solutions.
Another hard drive swap question - re: 8GB partition for OSX in iMac 266
I have a Tangerine iMac 266 that I am setting up for a neighbor's son. The original 6GB hard drive was toast, so I swapped in an old 10GB drive that had previously been removed from an iMacDV 400. The 10GB "new" drive had OSX 10.3.1 and OS 9.1 on a single partition. I am aware that these early iMacs need OSX in a first partition of less than 8GB, so I expected that I would need to partition the "new" drive. However, while I was loading an install CD after powering up, the iMac booted fine into OSX, despite it NOT being located in a first partition of less than 8GB (and has continued operating well - booting multiple times in OS X and 9, surfing the net, etc...the only weirdness is a refusal of Word X to run).
I thought this was impossible, and in researching this I found that the Apple support site says that, for this computer, "Mac OS X must be installed on a volume that is contained entirely within the first 8 GB of its ATA hard disk." see http://docs.info.apple.com/article.html?artnum=106235
My Questions:
Is the 8GB limit only related to INSTALLING OSX on these machines (and not RUNNING OSX)?
Will the machine run into problems later if a portion of the OS (i.e., during an update) gets installed outside of the first 8GB of the disk?
One of the log files says that the driver is out of date for the RageproC, and Classic applications that require more than 4MB of VRAM say that I don't have enough VRAM to run, yet the iMac has 6MB of VRAM (2 on the motherboard and 4 in the slot, as listed by the system profiler) - do I need to (or should I) reinstall the system software (I already tried loading the latest ATI drivers, but it didn't help)?
P.S. - to add more data points on the subject of RAM upgrades in these iMacs, my iMac 266 would not accept a 256MB PC-100 SO-DIMM that worked fine in an iBook and in the upper slot of a Rev. A iMac 233. Well, it accepted it, but only recognized the first 128MB.

I believe Duane is correct. Even with Mac OS 9, you can run fine as long as all OS components used during startup are within the first 8GB of space. However (even with Mac OS 9), as soon as something used during startup ends up beyond that limit, you will get an Open Firmware prompt or a gray screen at startup. The Mac OS X installer does not allow the installation target to exceed the limit as a preventative measure, not because it won't work at all.
The best "non-technical" explanation I have heard as to why is this: the IDE hardware (and its driver) can only "see" the first 8GB of space during the initial startup process, before the OS is loaded. Once startup completes, control is handed to the OS, which can see the entire drive. Therefore, apps have no problem running from beyond the limit. Only components needed before the hand-off are constrained to the 8GB limit.
FYI - On my iMac and 120GB drive, 7.78 GB (as shown in a Finder window) is the precise point where the Mac OS X Installer allows the volume to be the install target. "Precise" to within a few hundred MB.
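To make the arithmetic above concrete, here is a small sketch (my own illustration, not the installer's actual check) of the "fits within the first 8 GB" test, using binary gigabytes as the Finder of that era reported them:

```python
EIGHT_GB = 8 * 1024**3  # the firmware-visible limit, in binary bytes

def fits_boot_limit(partition_start, partition_size):
    """True if the whole partition lies within the first 8 GB of the disk.

    Sizes are in bytes; partition_start is the byte offset where the
    partition begins. Illustrative only -- the real cutoff the installer
    applies varies by a few hundred MB, as noted above.
    """
    return partition_start + partition_size <= EIGHT_GB

# A 7.5 GB first partition starting at the front of the disk fits:
print(fits_boot_limit(0, int(7.5 * 1024**3)))  # True
# A single 10 GB partition spanning the whole drive does not:
print(fits_boot_limit(0, 10 * 1024**3))        # False
```

This is also why the reported setup can boot "by luck": the whole 10 GB volume exceeds the limit, but as long as the actual blocks of the startup files happen to sit below the 8 GB mark, the firmware can still read them.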