Oracle File structure
I would like to know whether an Oracle datafile is operating-system dependent.
Is it possible to create a database on one operating system (for example, Solaris), copy the datafiles to another operating system (for example, HP-UX), and have the database work there?
Thanks,
Milena
Copying all of Oracle's OS-level files and deploying them will never work directly, because even though the init parameters for file locations can be changed, the datafile locations stored in the control file cannot be changed manually.
What you can do is FTP the files from one platform to the other and then start just the instance. Re-create the control file, giving the correct locations of all the files on the new platform. Once the control file is created, restart the database.
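The re-creation step described above can be sketched as below. Note this only stands a chance between platforms with the same endian format; a cross-endian move needs RMAN CONVERT or transportable tablespaces instead. All names and paths here are hypothetical; generate a real template first with ALTER DATABASE BACKUP CONTROLFILE TO TRACE on the source system.

```sql
-- Sketch only: re-create the control file with the datafile paths
-- valid on the new platform (database name and paths are made up).
STARTUP NOMOUNT;

CREATE CONTROLFILE REUSE DATABASE "ORCL" RESETLOGS NOARCHIVELOG
    MAXLOGFILES 16
    MAXDATAFILES 100
LOGFILE
    GROUP 1 '/u01/oradata/orcl/redo01.log' SIZE 50M,
    GROUP 2 '/u01/oradata/orcl/redo02.log' SIZE 50M
DATAFILE
    '/u01/oradata/orcl/system01.dbf',
    '/u01/oradata/orcl/sysaux01.dbf',
    '/u01/oradata/orcl/users01.dbf'
CHARACTER SET WE8ISO8859P1;

ALTER DATABASE OPEN RESETLOGS;
```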
Similar Messages
-
Creation of oracle directory structure on mounted file system in linux
Hi,
I need to use the Data Pump utility, which requires a directory object. Since my dump files are stored on another system, I usually mount that file system. Can I create an Oracle directory object on a mounted file system in Linux? Please advise. Thanks in advance.
Yes you can, why not?
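For what it's worth, a directory object is just a named pointer to an OS path, so a mount point works as long as the oracle OS user can read and write it. A minimal sketch (the directory name, mount path, and grantee below are hypothetical):

```sql
-- Sketch: directory object on an NFS-mounted path (names are made up)
CREATE OR REPLACE DIRECTORY dpump_dir AS '/mnt/remote_dumps';
GRANT READ, WRITE ON DIRECTORY dpump_dir TO scott;
```

It can then be referenced from the Data Pump command line, e.g. `impdp scott DIRECTORY=dpump_dir DUMPFILE=expdat.dmp`.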
-
The file structure online redo log, archived redo log and standby redo log
I have read some Oracle documentation about file structure and settings in a Data Guard environment, but I still have some doubts. What is the best file structure or settings in Oracle 10.2.0.4 on UNIX for a Data Guard environment with 4 primary databases and 4 physical standby databases? Based on the Oracle documents, there are 3 types of redo logs: online redo logs, archived redo logs and standby redo logs. The basic settings are:
1. Online redo logs --- This redo log must be on the primary database and on a logical standby database. It is not strictly necessary on a physical standby, because a physical standby is not open and doesn't generate redo. However, if you don't set up online redo logs on the physical standby, how can it function after a failover or switchover makes it the primary? In my standby databases, online redo logs have been set up.
2. Archived redo logs --- It is obvious that the primary database and the logical and physical standby databases all need these log files set up. The primary uses them to archive log files and ship them to the standby; the standby uses them to receive redo and apply it to the database.
3. Standby redo logs --- The documentation says a standby redo log is similar to an online redo log, except that it is used to store redo data received from another database. A standby redo log is required if you want to implement the maximum protection and maximum availability data protection modes, real-time apply, or cascaded destinations. So it seems this standby redo log should only be set up on the standby database, not on the primary. Is my understanding correct? When I reviewed the current redo log settings in my environment, I found that standby redo log directories and files have been set up on both the primary and standby databases. I would like to get more information and education from the experts: what is the best setting or structure on the primary and standby databases?
FZheng:
Thanks for your input. It is clear that we need all 3 types of redo logs on both databases. You answered my question.
But I have another question. The Oracle documentation says: if you have configured a standby redo log on one or more standby databases in the configuration, ensure the size of the current standby redo log file on each standby database exactly matches the size of the current online redo log file on the primary database. It also says that at log switch time, if there are no available standby redo log files that match the size of the new current online redo log file on the primary database, the primary database will shut down.
My current Data Guard environment setting is: on the primary DB, the online redo log group size is 512M and the standby redo log group size is 500M. On the standby DB, the online redo log group size is 500M and the standby redo log group size is 750M.
This was set up by someone else. Is this setting OK, or should I change the standby redo log on the standby DB to 512M to exactly match the online redo log size on the primary?
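Assuming the sizes do need to match, the fix would look something like the sketch below. Group numbers and file paths are hypothetical; Oracle's usual guideline is to create one more standby redo log group than there are online groups.

```sql
-- Sketch: add standby redo logs on the standby sized to match the
-- primary's 512M online redo logs.
ALTER DATABASE ADD STANDBY LOGFILE GROUP 10
    ('/u01/oradata/stby/srl10.log') SIZE 512M;
ALTER DATABASE ADD STANDBY LOGFILE GROUP 11
    ('/u01/oradata/stby/srl11.log') SIZE 512M;

-- Drop a mismatched 750M group only once it is no longer active:
-- ALTER DATABASE DROP STANDBY LOGFILE GROUP 4;
```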
Edited by: 853153 on Jun 22, 2011 9:42 AM -
Automated Message from Oracle Files
Good Morning:
When I set a new user account in Collaboration Suite, Oracle files generated a email notification like:
"(Automated message from Oracle Files)
An account has been created for you on Oracle Files
(http://middleserver:7779/files/app) with the following
User ID: user01
In order to enable Protocol Access in Oracle Files you must go to
http://middleserver:7779/files/app/ProtocolAccess and
enter your Single Sign-On Password.
Thank you for using Oracle Files. "
But I need to change this message, because the configured port is 7778, not 7779.
Where can I change this default port so the correct notification is generated?
Thanks a lot for your help.
Hello,
I was wondering if we had the ability to handle this via the adapter?
Possible, but you need to develop an adapter module for this.
It looks like a prolog/XML/epilog structure, but in your case, they are the ones sending. Where is the XML payload in the structure that you provided?
You can unescape the characters using a Java mapping. For example, find %3C and replace it with < . Here is a complete list of URL escape codes: http://www.december.com/html/spec/esccodes.html
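If the data ever needs to be unescaped on the database side rather than in a Java mapping, Oracle's UTL_URL package does the same job. The input string below is just an illustration:

```sql
-- Sketch: decode percent-encoded characters with UTL_URL
SELECT UTL_URL.UNESCAPE('%3Croot%3Edata%3C%2Froot%3E') AS unescaped
FROM dual;
-- %3C decodes to '<', %3E to '>', %2F to '/'
```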
Hope this helps,
Mark -
File structure validation in PLSQL
Hello All,
I need some clarification: if I want to validate a file's structure, how can I do it in PL/SQL? Below is an example of what my files look like.
IIORRIDGE
VA1601800700028120000000008+0000123822092013
VA1601800700029040000000008+0000323822092013
VA1601800700030860000000008+0000123822092013
VA1601800767943220000000008+0000123822092013
end;
Please help me
Mahesh
I need some clarification: if I want to validate the file structure, how can I do it in PL/SQL?
For example, take this line:
VA1601800700028120000000008+0000123822092013
There are 44 characters in total. If a line has more than 44 characters I want to flag it, and if it has fewer than 44 I want to flag it as well.
As in the line above, there will be more than 100,000 such lines.
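A minimal PL/SQL sketch of that 44-character check, assuming a directory object (here called DATA_DIR) already points at the file's location; the file name input.dat is also hypothetical:

```sql
-- Read the file line by line and report any line whose length is not 44
DECLARE
    f    UTL_FILE.FILE_TYPE;
    line VARCHAR2(4000);
    ln   PLS_INTEGER := 0;
BEGIN
    f := UTL_FILE.FOPEN('DATA_DIR', 'input.dat', 'r');
    LOOP
        BEGIN
            UTL_FILE.GET_LINE(f, line);
        EXCEPTION
            WHEN NO_DATA_FOUND THEN EXIT;  -- end of file reached
        END;
        ln := ln + 1;
        -- NVL handles empty lines, whose LENGTH is NULL
        IF NVL(LENGTH(line), 0) != 44 THEN
            DBMS_OUTPUT.PUT_LINE('Line ' || ln || ' has ' ||
                                 NVL(LENGTH(line), 0) || ' characters');
        END IF;
    END LOOP;
    UTL_FILE.FCLOSE(f);
END;
/
```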
The problem of 'validate the file structure' implies that there could be issues with the structure of the file or the records.
And therein lies the problem if you try to use an external table since a table, or external table, is generally expected to have a known number of columns of known datatypes.
So first you need to document the business rules that you want to implement:
1. What are ALL of the validation rules that you need to check for?
a. what record delimiters are expected (e.g. LF, CRLF, etc)?
b. do you need to validate that the proper delimiters have been used?
c. how are the fields of a record delimited? Is it a delimited file (by commas or other?), fixed-width, etc.
d. what is the field delimiter (if delimited)?
e. what are the datatypes for each field of the record
f. can fields be NULL? If so, how is a null field indicated?
g. do all records have to have the same number of fields? Or is TRAILING NULLCOLS allowed?
2. How do you want to handle all of the errors that you detect?
a. reject the entire file if there is an error in ANY record?
b. reject only the records that have an error but accept the ones that do not have errors?
3. How do you plan to allow users access to the problem records?
a. give them access to the actual BAD file containing the raw data?
b. give them access to a DB table containing the raw data?
4. How do you plan to reprocess ONLY the bad data once it has been corrected?
a. reload the entire file?
b. reload a new file containing the fixed records?
5. How do you plan to allow users to access the validated, good data?
a. give them access to a DB table?
Sometimes the 'best' solution is to NOT load data into Oracle that isn't already validated.
You could write a standalone Java application to perform this 'file validation' and either produce 'clean' files that you then load into Oracle, or have the app validate a file and load the clean data at the same time.
But until you document ALL of the requirements in terms of handling the 'dirty data' it is too early to be talking about the 'best' solution. -
Oracle Files max directory depth
Hi, we have users who have been uploading directory structures to Oracle Files using WebDAV. Some complained that when uploading directory trees around 10-17 subdirectories deep, they got error messages. I'm unclear as to what the exact error message is, but this does indeed happen. Does anyone know the maximum directory depth that Oracle Files allows?
These uploads of large, deeply nested directory structures subsequently created runaway Web Cache processes on the midtier. I don't know whether these two issues have anything to do with running WebDAV over HTTPS.
Thanks in advance for your input.
I'm searching for the IFS portlet. Could anybody provide me with a link to download this code?
Many thanks ahead!
P.S.: Marcel, I'm very interested in your own IFS portlet. Your code would be really helpful.
Using Adobe Bridge file structure with iPhoto (latest version)
I use Adobe Bridge and have all my pics in named folders in Pictures/PICS/Folder Names. Inside the PICS folder is the iPhoto library (with only a few pics in it). Is there any way I can use the file structure I have set up with Bridge and iPhoto (latest) simultaneously? I really don't want to import (copy) all my pics into iPhoto because I am pretty sure I will end up with two versions of each. I haven't been able to manage pics manually the way I like to in older versions of iPhoto.
Here's some info to help you setup Photoshop for use with iPhoto:
Using Photoshop or Photoshop Elements as Your Editor of Choice in iPhoto.
1 - select Photoshop or Photoshop Elements as your editor of choice in iPhoto's General preferences under the "Edit photo:" menu.
2 - double click on the thumbnail in iPhoto to open it in Photoshop. When you're finished editing, click on the Save button. If you immediately get the JPEG Options window, make your selection (Baseline Standard seems to be the most compatible JPEG format) and click on the OK button. You're done.
3 - however, if you get the navigation window, that indicates that PS wants to save it as a PS-formatted file. You'll need to either select JPEG from the menu and save (top image), or click on the desktop in the navigation window (bottom image) and save it to the desktop for importing as a new photo.
This method will let iPhoto know that the photo has been edited and will update the thumbnail file to reflect the edit.
NOTE: With Photoshop Elements the Saving File preferences should be configured as shown:
I also suggest that Maximize PSD File Compatibility be set to Always. In PSE's General preference pane, set the Color Picker to Apple as shown:
Note 1: screenshots are from PSE 10
Note: to switch between iPhoto and PS or PSE as the editor of choice Control (right)-click on the thumbnail and select either Edit in iPhoto or Edit in External Editor from the contextual menu. If you use iPhoto to edit more than PSE re-select iPhoto in the iPhoto General preference pane. Then iPhoto will be the default editor and you can use the contextual menu to select PSE for your editor when desired.
OT -
How to join 5 different tables using SQL to make a flat file structure
I am trying to load five different tables into one flat-file-structured table without a Cartesian product.
I have five different tables Jobplan, Jobtask(JT), Joblabor(JL), Jobmaterial(JM) and Jpsequence(JS) and the target table as has all the five tables as one table.
The data i have here is something like this.
jobplan = 1record
jobtask = 5 records
joblabor = 2 records
jobmaterial = 1 record
jpsequence = 3 records
The output has to be like this.
JPNUM DESCRIPTION LOCATION JT_JPNUM JT_TASK JL_JPNUM JL_labor JM_JPNUM JM_MATERIAL JS_JPNUM JS_SEQUENCE
1001 Test Jobplan USA NULL NULL NULL NULL NULL NULL NULL NULL
1001 Test Jobplan USA 1001 10 NULL NULL NULL NULL NULL NULL
1001 Test Jobplan USA 1001 20 NULL NULL NULL NULL NULL NULL
1001 Test Jobplan USA 1001 30 NULL NULL NULL NULL NULL NULL
1001 Test Jobplan USA 1001 40 NULL NULL NULL NULL NULL NULL
1001 Test Jobplan USA 1001 50 NULL NULL NULL NULL NULL NULL
1001 Test Jobplan USA NULL NULL 1001 Sam NULL NULL NULL NULL
1001 Test Jobplan USA NULL NULL 1001 Mike NULL NULL NULL NULL
1001 Test Jobplan USA NULL NULL NULL NULL 1001 Hammer NULL NULL
1001 Test Jobplan USA NULL NULL NULL NULL NULL NULL 1001 1
1001 Test Jobplan USA NULL NULL NULL NULL NULL NULL 1001 2
1001 Test Jobplan USA NULL NULL NULL NULL NULL NULL 1001 3
Please help me out with this issue.
Thanks,
Siva
Edited by: 931144 on Apr 30, 2012 11:35 AM
Hope the below helps you:
CREATE TABLE JOBPLAN
( JPNUM NUMBER,
  DESCRIPTION VARCHAR2(100)
);
INSERT INTO JOBPLAN VALUES(1001,'Test Jobplan');
CREATE TABLE JOBTASK
( LOCATION VARCHAR2(10),
  JT_JPNUM NUMBER,
  JT_TASK NUMBER
);
INSERT INTO JOBTASK VALUES('USA',1001,10);
INSERT INTO JOBTASK VALUES('USA',1001,20);
INSERT INTO JOBTASK VALUES('USA',1001,30);
INSERT INTO JOBTASK VALUES('USA',1001,40);
INSERT INTO JOBTASK VALUES('USA',1001,50);
CREATE TABLE JOBLABOR
( JL_JPNUM NUMBER,
  JL_LABOR VARCHAR2(10)
);
INSERT INTO JOBLABOR VALUES(1001,'Sam');
INSERT INTO JOBLABOR VALUES(1001,'Mike');
CREATE TABLE JOBMATERIAL
( JM_JPNUM NUMBER,
  JM_MATERIAL VARCHAR2(10)
);
INSERT INTO JOBMATERIAL VALUES(1001,'Hammer');
CREATE TABLE JOBSEQUENCE
( JS_JPNUM NUMBER,
  JS_SEQUENCE NUMBER
);
INSERT INTO JOBSEQUENCE VALUES(1001,1);
INSERT INTO JOBSEQUENCE VALUES(1001,2);
INSERT INTO JOBSEQUENCE VALUES(1001,3);
SELECT JP.JPNUM AS JPNUM ,
JP.DESCRIPTION AS DESCRIPTION ,
NULL AS LOCATION ,
NULL AS JT_JPNUM ,
NULL AS JT_TASK ,
NULL AS JL_JPNUM ,
NULL AS JL_labor ,
NULL AS JM_JPNUM ,
NULL AS JM_MATERIAL ,
NULL AS JS_JPNUM ,
NULL AS JS_SEQUENCE
FROM JOBPLAN JP
UNION ALL
SELECT JP.JPNUM AS JPNUM ,
JP.DESCRIPTION AS DESCRIPTION ,
JT.LOCATION AS LOCATION ,
JT.JT_JPNUM AS JT_JPNUM ,
JT.JT_TASK AS JT_TASK ,
NULL AS JL_JPNUM ,
NULL AS JL_labor ,
NULL AS JM_JPNUM ,
NULL AS JM_MATERIAL ,
NULL AS JS_JPNUM ,
NULL AS JS_SEQUENCE
FROM JOBPLAN JP, JOBTASK JT
UNION ALL
SELECT JP.JPNUM AS JPNUM ,
JP.DESCRIPTION AS DESCRIPTION ,
NULL AS LOCATION ,
NULL AS JT_JPNUM ,
NULL AS JT_TASK ,
JL.JL_JPNUM AS JL_JPNUM ,
JL.JL_labor AS JL_labor ,
NULL AS JM_JPNUM ,
NULL AS JM_MATERIAL ,
NULL AS JS_JPNUM ,
NULL AS JS_SEQUENCE
FROM JOBPLAN JP, JOBLABOR JL
UNION ALL
SELECT JP.JPNUM AS JPNUM ,
JP.DESCRIPTION AS DESCRIPTION ,
NULL AS LOCATION ,
NULL AS JT_JPNUM ,
NULL AS JT_TASK ,
NULL AS JL_JPNUM ,
NULL AS JL_labor ,
JM.JM_JPNUM AS JM_JPNUM ,
JM.JM_MATERIAL AS JM_MATERIAL ,
NULL AS JS_JPNUM ,
NULL AS JS_SEQUENCE
FROM JOBPLAN JP, JOBMATERIAL JM
UNION ALL
SELECT JP.JPNUM AS JPNUM ,
JP.DESCRIPTION AS DESCRIPTION ,
NULL AS LOCATION ,
NULL AS JT_JPNUM ,
NULL AS JT_TASK ,
NULL AS JL_JPNUM ,
NULL AS JL_labor ,
NULL AS JM_JPNUM ,
NULL AS JM_MATERIAL ,
JS.JS_JPNUM AS JS_JPNUM ,
JS.JS_SEQUENCE AS JS_SEQUENCE
FROM JOBPLAN JP, JOBSEQUENCE JS;
JPNUM DESCRIPTION LOCATION JT_JPNUM JT_TASK JL_JPNUM JL_LABOR JM_JPNUM JM_MATERIA JS_JPNUM JS_SEQUENCE
1001 Test Jobplan NULL NULL NULL NULL NULL NULL NULL NULL NULL
1001 Test Jobplan USA 1001 10 NULL NULL NULL NULL NULL NULL
1001 Test Jobplan USA 1001 20 NULL NULL NULL NULL NULL NULL
1001 Test Jobplan USA 1001 30 NULL NULL NULL NULL NULL NULL
1001 Test Jobplan USA 1001 40 NULL NULL NULL NULL NULL NULL
1001 Test Jobplan USA 1001 50 NULL NULL NULL NULL NULL NULL
1001 Test Jobplan NULL NULL NULL 1001 Sam NULL NULL NULL NULL
1001 Test Jobplan NULL NULL NULL 1001 Mike NULL NULL NULL NULL
1001 Test Jobplan NULL NULL NULL NULL NULL 1001 Hammer NULL NULL
1001 Test Jobplan NULL NULL NULL NULL NULL NULL NULL 1001 1
1001 Test Jobplan NULL NULL NULL NULL NULL NULL NULL 1001 2
1001 Test Jobplan NULL NULL NULL NULL NULL NULL NULL 1001 3
-
Moving a Folder from the iTunes Folder to My Own File Structure
I'm trying to move a folder from the default iTunes folder to a folder on another drive where I manage the file structure myself. This lets me file music as I like on the disk and makes things much easier to find when I need to.
Recently I've noticed that when I do this I lose some of the song metadata, such as the image and rating, which is kind of annoying. Is there any way to do this and not lose metadata?
If the external drive is in NTFS format (which is how most new drives are shipped), the Mac can't write to it. Reformat it with Disk Utility, putting it into Mac OS Extended format.
-
Imported par file to NWDS, jar file missing. Where find par file structure?
Hi All,
We want to implement an idle session timeout on the portal, and we found a wiki here on the subject. In order to implement the solution we need to make adjustments to com.sap.portal.navigation.masthead.par.bak. When we import this par file into NetWeaver Developer Studio, the libraries and jar files are missing, so it then errors in the portal because files are missing.
I could add them manually, but I don't know what the file structure of this par file is, so I don't know where to add what. Does anyone know?
Also when I export the file after manually adding the missing files, will it export all the files or will it behave the same way as the import and leave out the additional files?
If anyone could help, I'd really appreciate it,
Kind regards,
Liz.
Hi Bala,
I thought of that just before you responded and it worked!! Thanks for the response..
Regards,
Liz. -
Need help understanding Time Capsule file structure and how to get it back
I have the original Time Capsule, on which I back up both my Mac Pro and my wife's MacBook. When Snow Leopard came out, I successfully used the 'Restore from Time Machine' feature on my Mac Pro, so I know it has worked. However, my wife's MacBook hard drive died the other day, and when I tried 'Restore from Time Machine', all it would find was a backup from April (when I put in a new, larger drive for her). Time Machine would not find any backup files newer than April. She stated that she had seen the Time Machine backup notices regularly, as recently as the day the hard drive died (Nov. 23), so I figured I should have no problem. Here is what I found in my troubleshooting, which leads to my questions below.
This is the file structure I found: (note that our ID’s are ‘Denise’ and ‘John’)
*Time Capsule* (the drive as listed in my Finder window sidebar under ‘shared’)
>Folder called ‘Time Capsule’ (when logged in as either ‘Denise’ or ‘John’)
>>Denise Sparsebundle
>>>Backup of Denise’s iBook (mounted image)
>>>>Folder called ‘Backups.backupdb’
>>>>>Folder called ‘Denise’s iBook
>>>>>>Single folder with old April backup (not the right files)
>>John Sparsebundle
>>>Backup of John’s Mac Pro (mounted image)
>>>>Folder called ‘Backups.backupdb’
>>>>>Folder called ‘John’s Mac Pro’
>>>>>>Folders containing all my backup files
>Folder Called ‘Denise’ (if logged as ‘Denise’)
>>Denise’s Sparsebundle (a disk image)
>>>Backup of Denise iBook (the mounted image. Name from old machine)
>>>>Backups.Backupdb
>>>>>Denise’s iBook (Contains the backup I need)
>Folder Called ‘John’ (if logged in as ‘John’)
>> (empty)
For some reason, my wife’s backup files are stored within a folder located at the top level of the Time Capsule drive called ‘Denise’, however, mine are within a folder called ‘Time Capsule’ which is at the same level as the ‘Denise’ folder.
For some reason, when trying to use Time Machine to recover, it bypasses the top level ‘Denise’ folder which contains the correct files and goes into the ‘Time Capsule’ folder and finds the outdated backup files.
I would assume that both my backup files and my wife’s should be at the same level of the Time Capsule.
I was eventually able to use Migration Assistant to recover the files after installing a fresh OS and mounting the correct Sparsebundle image.
So, my question, how do I get this fixed so that if I have to recover a drive with Time Capsule, it will find the correct files.
Sorry for the long post, and thanks in advance for any help.
John Ormsby wrote:
What I was trying to determine is why different backups for one particular machine are located at different file-structure levels on my Time Capsule, and why the most recent one for that machine is at a different level than for my other machine. Also, what is the correct level for both machines' backups?
well John, first can you clarify if you are on 10.4.8 as your profile suggests? if so, i'm wondering how you can use TM at all, because TM was introduced with Leopard. if you're not on 10.4.8, please update your profile. if you could, please also indicate details of your machine such as available RAM, processor, etc.
second, what OS is your wife's machine running ?
third, i frankly don't know too much about TM's file structure or if there indeed is such a thing as the correct one. however, i do know that it is best to leave TM to do its thing (as long as it's working). FWIW, though off-topic, you may want to have a look at this read for further information.
last but not least, i see TM backups to my TC only as a part of my backup strategy. the backbone of my strategy is to create bootable clone of my startup disk(s) using e.g. Carbon Copy Cloner on a regular basis. while TM is capable of doing a full system restore, i don't trust it as much as a working clone. i use TM to retrieve a file i accidentally trashed a week ago but you cannot boot from a TM backup if your startup disk goes belly up.
If I have missed this information in either the FAQs or the Troubleshooting article, I would greatly appreciate being pointed to it .
i expect you didn't miss anything and i'm sorry you didn't find any help there. perhaps if Pondini (author of the FAQ and troubleshooting user tips) reads this thread, you will get the necessary advice on the file structure and more besides.
good luck to you ! -
AVCHD File Structure From MAC to PC
Hello,
I know this is sort of off topic but I need the expertise of this forum....
I am the videographer (mac user) for a speaking group and have to film speeches 5-7 minutes in length each week. The problem is how to get the videos to each person without processing on my Mac and uploading to youtube so they can see them.
I am Using a Canon HF10 and wonder if I could pop out the memory card from the camera put it in a card reader attached to a person's PC and transfer the files that way. I know that I would copy the entire file structure to their computer. I assume the PC person would have to have something that could work with the AVCHD files but I don't know what is on a PC computer these days. The other dilemma is that the speakers HAVE ABSOLUTELY NO COMPUTER skills.
Anyone have any ideas how to get the video to a viewable state without going through FCE.
thanks,
AlI agree with Martin and might add that I was in a similar project late last year with several folks who certainly weren't as computer savvy as myself, and YouTube turned out to be the solution. That way I didn't have to deal with the perennial "I couldn't open it" comments you WILL receive. Everyone was able to view YouTube.
You might try uploading to Vimeo - I think you can actually upload .MTS files directly to their site, if you want to just post your raw footage. You could just get the files in the Finder directly from the file structure on the HF10, copy them to your Mac and upload from there. (I haven't tried this, but Vimeo's site says they accept .MTS files, which is what the HF10 videos will be, in the "STREAM" subfolder)
The downside is that their free account maxes out at 500MB of uploads per week. That may be enough for you but I don't know. If you pay, the limit is quite a bit higher. -
File Structure with iTunes on an External Drive
This is an image of the file structure on my external drive that holds all of my iTunes music:
Is this correct, or should I have set it up differently? I am asking since it is different from the iTunes file structure on my MacBook Pro, and some files get uploaded to my external drive while others go into my MBP's iTunes Music folder: all of my iPhone/iPad apps, Album Artwork, and Mobile Applications. I have Books folders on both drives, and both have books in them.
I should say that I had a major problem a few years ago (unrelated to the above) with one of the iTunes updates, which disconnected most of my songs from their respective files. I tried a solution a friend suggested, which really messed up the files, and now many of them have the wrong info inside the song files, i.e., Otis Redding as the artist on a Radiohead track, or another group's cover art on a Beatles album. I have a large library and have slowly gone through and fixed many things, but it is still quite messy and a huge bummer. As a proud new owner of a Late 2011 MacBook Pro, I would love to have a solution for this as well.
TIA
Well, I just went through the same thing. I had an 80GB LaCie Porsche mobile drive and changed to a LaCie Rugged 250. I keep all my music on an external drive, never local; not a fan of loading up my laptop. Anyway, I was getting those same errors. I had copied all the iTunes library contents from the Porsche drive to the Rugged drive (it took quite some time to transfer, so you knew it was working), then changed the iTunes preferences to save to the Rugged drive. If you trashed or deleted any files from the new drive and just put them in the trash without emptying it, you get that 501 trashes path. I found I had multiple duplicates of songs, with the name of the song followed by a 1.m4a. I think the library .xml file is the key to a clean transfer of music. I think this file is stored locally in your user folder and is then used to point to the alternate drive, but from what I've read it's the file with all the playlist information and such in it. If you find it and move it over to the mobile drive, then hold Option while starting iTunes and tell it to look for that file on the new mobile drive, I think you'll have no worries. I'm actually gonna try this myself.
You don't want to know how it got it working. I physically opened every music file on the rugged drive to make sure the were all where they were supposed to be. All 70'gb's of music. Never doing that again. -
Can't install Windows; it says the Boot Camp partition is not formatted with an NTFS file structure
Using Boot Camp Assistant, it gets to the point of installing Windows 7 and then won't proceed because the Boot Camp partition is not an NTFS file structure. It also seems strange to me that there are 5 partitions; I would have expected either 2 or 3. Please help, this is very frustrating.
Thanks in advance
Open, if not open already, the Windows formatter. Identify the BC Windows partition. It will be the one listed with the proper size you created and/or will be labeled as the C: drive. Be careful you select the right one, or you may corrupt the entire drive.
Format the partition as NTFS. -
Issue with Target File structure
Hi Experts ,
Mine is a proxy-to-file scenario. Please advise on the requirement below. My target file structure should be as follows:
HDR:PARTNER,CHDAT,NEW_TEL,OLD_TEL,PREF_CONTACT_NO,TEL,EXXX_ID
1029382,28.02.2011,0782191829,049829329,Y,3,0
1029382,28.02.2011,0783484311,077383738,N,2,0
1029382,28.02.2011,01972383934,0113938393,N,,0
1039385,28.02.2011,0782133829,079829310,Y,3,0
10245748,28.02.2011,N,,,,DAVID.WHITAKERattheYAHOO.CO.UK
112928393,28.02.2011,01183393843,01123839388,N,,Tom.HanksattherateSKYdotCOM
FTR:UPDATE_VENDOR_CONTACTS_SRM_TO_EIS,01.03.2011 02.18.29,6
This structure has 3 parts: header, body and trailer.
I am planning to create a target structure. Is it possible to get the field names of the structure in it?
Header
HeaderInfo : HDR:PARTNER,CHDAT,NEW_TEL,OLD_TEL,PREF_CONTACT_NO,TEL,EMAIL_ID
Body
1029382,28.02.2011,0782191829,049829329,Y,3,0
1029382,28.02.2011,0783484311,077383738,N,2,0
1029382,28.02.2011,01972383934,0113938393,N,,0
Footer
Footer Info :FTR:UPDATE_VENDOR_CONTACTS_SRM_TO_EIS,01.03.2011 02.18.29,6
I will take care of the comma separation, and of the single line for the header/footer and multiple lines for the body, in content conversion.
In the footer I need a date and time stamp, along with the total number of rows in the body.
How do I get these?
Please suggest.
Arnab.
Hi,
Yes, you can get the timestamp and the count of records in the footer.
Please define your target structure like below:
MT_
Records
Header (0 to 1)
Headerdata
Items (0 to unbounded)
field1
field2
Footer
Footerdata
In the mapping, get the current date using the date function.
Then pass the Item node to the count function.
Concat both of them along with the constants you need, and assign the result to the footer data.
Thanks
Suma