10g Questions - Folder structure, Files, Post-migration Scripts

Hi, Oracle Pros
I have a few questions that came up during my 10g installation, and I hope I can get some help from you.
1- After installing the 10g (Release 2) software AND Enterprise Manager (Release 1), I noticed that the installation arranged the folders under 'Oracle' differently on my Windows 2003 Server. For example, I have two 'oradata' folders; one (EM) is right under the root 'oracle', and the other (10g) is under 'product > 10g'.
We'd like to rearrange those folders so the layout is consistent and makes more sense. Would rearranging the folders affect applying patches later? I assume the installer's layout follows OFA, which is why I am hesitant to rearrange the folders after the install.
2- Does one use a PFILE or an SPFILE to start a database? I understand one can be created from the other. Does using either make any difference? If so, how?
3- I was told that once the software install is done, not only do we need to import our 8i datafiles, but we also need to write scripts for the users, grants, and tablespaces to be migrated. Is writing scripts for those objects really necessary? Why can't the OUI take care of all that?
I'd appreciate any help.
Have a good day.
CJ

2- Does one use a PFILE or an SPFILE to start a database? I understand one can be created from the other. Does using either make any difference? If so, how?
Either one.
The issue with PFILE is that it must be available to the DBA running the STARTUP command - on that DBA's platform. If you have multiple DBAs and possibly permit remote administration, there is a danger that you will have several PFILEs that are not in sync.
Because of this, Oracle created the SPFILE concept. It ensures there is a single, authoritative initialization file, and that file is under the control of the server. It can be backed up by RMAN, it can be used by RAC instances, etc.
After kicking and screaming at the change, I have become quite happy using SPFILE only - except for those increasingly rare non-dynamic parameter changes.

Similar Messages

  • Trouble with Post Migration Scripts

    I am trying to run the volrepair script but I am getting errors. I have run the Update NDS option from nssmu, but still no luck.
    ccs_fs1:~ # /opt/novell/migration/sbin/serveridswap/scripts/repair/volrepair.rb -a admin.ccboe -p darabc -f /var/opt/novell/migration/
    Information: Checking NSS status ...
    Information: Getting GUID for NSS Pool ADMIN
    Information: Executing LC_ALL=en_US;ldapsearch -LLL -s sub -H "ldaps://192.168.0.6" -x -D "admin.ccboe" "(&(CN=CCS_FS1_ADMIN_POOL)(objectClass=nssfsPool))" "guid" -W
    Error: Failed to get guid for pool ADMIN
    Information: Getting GUID for NSS Pool APPS
    Information: Executing LC_ALL=en_US;ldapsearch -LLL -s sub -H "ldaps://192.168.0.6" -x -D "admin.ccboe" "(&(CN=CCS_FS1_APPS_POOL)(objectClass=nssfsPool))" "guid" -W
    Error: Failed to get guid for pool APPS
    Information: Getting GUID for NSS Pool SYS
    Information: Executing LC_ALL=en_US;ldapsearch -LLL -s sub -H "ldaps://192.168.0.6" -x -D "admin.ccboe" "(&(CN=CCS_FS1_SYS_POOL)(objectClass=nssfsPool))" "guid" -W
    Error: Failed to get guid for pool SYS
    Information: Getting GUID for Volume APPS
    Information: Executing LC_ALL=en_US;ldapsearch -LLL -s sub -H "ldaps://192.168.0.6" -x -D "admin.ccboe" "(&(CN=CCS_FS1_APPS)(objectClass=Volume))" "guid" -W
    Error: Failed to get guid for Volume APPS
    Information: Getting GUID for Volume SYS
    Information: Executing LC_ALL=en_US;ldapsearch -LLL -s sub -H "ldaps://192.168.0.6" -x -D "admin.ccboe" "(&(CN=CCS_FS1_SYS)(objectClass=Volume))" "guid" -W
    Error: Failed to get guid for Volume SYS
    Information: Updated NSS Pool objects successfully
    Information: MPVGUID v1.1 - Modify Pool or Volume GUID
    List is Empty, there are no Volumes
    GUIDModify failed, status -1
    Failed to Parse the XML from Buffer
    Information: Updated NSS volume objects successfully
    Information: MPVGUID v1.1 - Modify Pool or Volume GUID
    List is Empty, there are no Volumes
    GUIDModify failed, status -1
    Failed to Parse the XML from Buffer
    Warning: The following NSS Pools does not exist on the source server. Use "nssmu" option "Update NDS" for these Pools to repair.
    Warning: ADMIN
    Warning: APPS
    Warning: SYS
    Warning: The following NSS Volumes does not exist on the source server. Use "nssmu" option "Update NDS" for these Volumes to repair.
    Warning: APPS
    Warning: SYS

    You need to have the same set of pools and volumes on the source server and the target server (at least the names should be identical).
    The error says that the pools or volumes that exist on the target server do not match those on the source server.
    Try updating them from nssmu or iManager.
    Refer to section "9.0 Preparing for Transfer ID -> 9.1 Prerequisites" of http://www.novell.com/documentation/....html#bookinfo
    -Ramesh

  • I would like to combine multiple profiles into a new computer. Which are the most important folder or files to migrate?

    The user profile on my old computer somehow became corrupted and I can no longer log on with it. Unfortunately I was not aware of the Firefox Sync feature at the time. I used a new profile and had to start from scratch to save new bookmarks and extensions/plugins. I did set up Firefox Sync for this new profile. Now I have a brand new computer and want to migrate the bookmarks to it, including the original bookmarks from the old computer as well as the bookmarks in the second profile. While I have located the profile folder for each profile on my old computer (as well as the extensions and plugins folders), the content of these folders is daunting. I would like to know the minimal set of folders I should migrate, and whether it would be safe to simply copy and paste the entire folder. Please note that I have also added new bookmarks on my new computer which I would not like erased. I hope these details are not convoluted.

    Firefox has two methods of bringing in bookmarks from another Firefox profile:
    * Restore a backup (JSON files in the bookmarkbackups folder): this completely replaces what you have, so is not the first step
    * Import from an HTML file: this is additive and will create an Imported Bookmarks folder or add them under Unsorted Bookmarks
    Restore is best if you tag your bookmarks, because the JSON format preserves tags. If tags are not important, the HTML import route is easier.
    What I suggest is:
    (1) Export your current bookmarks to the HTML format; you can reimport these later. See: [[Export Firefox bookmarks to an HTML file to back up or transfer bookmarks]]. Open the file in Firefox (it's a web page) to make sure it looks complete before proceeding.
    Next, get the bookmarkbackups folders from each of your old profiles that you want to restore, and look for the most-complete-looking JSON files (they will have the most recent dates and largest number of bookmarks indicated in the file name). Proceed from the smallest set to the largest, or oldest to newest; basically, restore the most important one last.
    (2) Restore a backup, then export to HTML. This article has the steps for a restore: [[Restore bookmarks from backup or move them to another computer]]. Then export to HTML and check for completeness.
    (3) Repeat #2 until you get to the last/most important restore. You don't need to export that one.
    (4) Import the HTML files you've made throughout the process.
    (5) In the Library dialog, reorganize. Firefox doesn't have a deduplicator, but you might find one or more extensions to do the job on the Add-ons site if you find a lot of overlap.
    Make sense?

  • Question on CAPolicy.inf file and post-installation script

    I'm preparing a small PKI implementation with a single Enterprise Root CA on Windows 2008 R2 Enterprise.
    The primary role of this CA is to provide certificates for about 20 laptops that will use the certificates for authentication to a wireless network.
    I have prepared a CAPolicy.inf file and a post installation script (below).
    Renewal period for the root cert should be 10 years, CRL publication every 2 days with Delta publication every 12 hours (details in scripts below).
    I want to make sure the AIA and CRL url commands are correct.
    Does this look correct?
    AIA
    1:%WINDIR%\System32\CertSrv\CertEnroll\%%1_%%3%%4.crt
    This should publish the CA certificate to the local file system "certenroll".
    2:ldap:///CN=%%7,CN=AIA,CN=Public Key Services,CN=Services,%%6%%11
    This places the LDAP url in the AIA extension of issued certs.
    I am not planning to use HTTP, hence its absence.
    CRL
    1:%WINDIR%\System32\CertSrv\CertEnroll\%%3%%8%%9.crl
    This publishes the CRL to the local file system ("certenroll" subfolder).
    10:ldap:///CN=%%7%%8,CN=%%2,CN=CDP,CN=Public Key Services,CN=Services,%%6%%10
    Indicates CDP in AD DS and includes CDP url in issued certificates.
    Complete scripts
    1. CAPolicy.inf - %windir%
    [Version]
    Signature= "$Windows NT$"
    [certsrv_server]
    renewalkeylength=2048
    RenewalValidityPeriodUnits=10
    RenewalValidityPeriod=years
    CRLPeriod = days
    CRLPeriodUnits = 2
    CRLDeltaPeriod = hours
    CRLDeltaPeriodUnits = 12
    LoadDefaultTemplates=0
    2. Install Role
    Follow steps in GUI here
    3. Run post-install script
    certutil -setreg CA\DSConfigDN CN=Configuration,DC=mydomain,DC=local
    certutil -setreg CA\CRLPeriodUnits 2
    certutil -setreg CA\CRLPeriod "days"
    certutil -setreg CA\CRLDeltaPeriodUnits 12
    certutil -setreg CA\CRLDeltaPeriod "hours"
    certutil -setreg CA\ValidityPeriodUnits 10
    certutil -setreg CA\ValidityPeriod "Years"
    certutil -setreg CA\CACertPublicationURLs "1:%WINDIR%\System32\CertSrv\CertEnroll\%%1_%%3%%4.crt\n2:ldap:///CN=%%7,CN=AIA,CN=Public Key Services,CN=Services,%%6%%11"
    certutil -setreg CA\CRLPublicationURLs "1:%WINDIR%\System32\CertSrv\CertEnroll\%%3%%8%%9.crl\n10:ldap:///CN=%%7%%8,CN=%%2,CN=CDP,CN=Public Key Services,CN=Services,%%6%%10"
    certutil -setreg CA\csp\DiscreteSignatureAlgorithm 1
    certutil -setreg CA\AuditFilter 127
    net stop certsvc & net start certsvc
    certutil -crl
    Please mark as helpful if you find my contribution useful or as an answer if it does answer your question. That will encourage me - and others - to take time out to help you.

    A couple of things (just a quick glance)
    1) CAPolicy.inf
    Everything looks fine in this file. You will have a 2-day base CRL and 12-hour delta CRLs. No default templates are loaded, so you must manually designate all certificate templates to be published.
    2) Post install Script
    a. Best practice is to use HTTP only, not LDAP, so your whole design does not follow best practices. Furthermore, if you issue a certificate to a single non-domain-joined machine, it will be unable to evaluate your certificates. If you decide to do VPN (or use an internal SSL cert on an externally accessible Web server), revocation checking will fail because you do not publish AD to the external world.
    b. Will you be issuing 10-year certificates from the CA? These lines:
    certutil -setreg CA\ValidityPeriodUnits 10
    certutil -setreg CA\ValidityPeriod "Years"
    define the maximum validity period of certs issued *by* the CA. For example, if a certificate template states 4 years, then a four-year certificate will be issued (less than 10 years). If a template states 12 years, then the maximum age will be 10 years. Think of it as a governor.
    c. CACertPublicationURLs. This should include an HTTP URL, and if you still want to include LDAP, HTTP should come before the LDAP URL.
    d. CRLPublicationURLs. This should include an HTTP URL. If you still want to include LDAP, HTTP should come before the LDAP URL. Your checkbox value for the LDAP URL is incorrect as well: for a CA that issues both base and delta CRLs, you must have a value of 79 (all check boxes enabled). Your current configuration would fail revocation checking because you do not publish the base or delta CRL to AD. There is also another check box missing (I believe the one that includes the URL in the base CRL's freshest CRL extension, which is how clients find the delta CRL).
    e. Do not include the DiscreteSignatureAlgorithm line in the script. If you have any Windows XP clients, they will not be able to use the certs (BTW, this should have been AlternateSignatureAlgorithm in my book - where you sourced your scripts). This would only be used on a CA where you used something like elliptic curve as the asymmetric algorithm and SHA384 as the signature algorithm.
    f. The restart line should be net stop certsvc && net start certsvc. The double && ensures that the service comes to a complete stop before the start command is executed.
    HTH,
    Brian

  • Copy folder with files in structured format

    All Experts,
    I am new to InDesign and JavaScript scripting. I need to copy a folder with its files (a structured folder) from one location to another location, keeping the same folder name. I tried File.copy(), but that copies only a single file; I need to copy the folder in its structured format, i.e. the folder and any subfolders...
    Thanks,
    sag

    Sorry for the delay in response.
    I assume both sagkrish and Mac_06 are either the same person or working together?
    When you said:
    when we execute your code it's showing error "cannot compile the script".
    were you trying after fixing the missing double-quote (my post of 1:42am)?
    If not, maybe one of the Windows folks could advise better. The official Microsoft help for XCOPY is http://www.microsoft.com/resources/documentation/windows/xp/all/proddocs/en-us/xcopy.mspx?mfr=true which might be helpful in a different way than Jongware's link, which is good too.
    I can't quite tell whether you're having a problem when you run the XCOPY command from the CMD.EXE Windows Command Prompt. If you are, well, it's not really an InDesign scripting question, but maybe we can help you anyhow. But you'll need to give us a lot more specifics on what is breaking. Does it work with other directories? Are there network filesystems involved? Are the source and destination directories outside of the running user's home folder?
    If you're having problems overwriting files, you might try adding /R.
    (Does anybody else want to test it?)
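    If it helps to see the logic outside of InDesign or XCOPY, here is a minimal, illustrative Python sketch of a structure-preserving folder copy; both paths are hypothetical placeholders, not anything from the original post.
    # Minimal sketch: copy a folder and everything in it (files and subfolders)
    # to another location, keeping the same top-level folder name.
    import shutil
    from pathlib import Path

    src = Path("/path/to/source_folder")       # hypothetical source folder
    dst_parent = Path("/path/to/destination")  # hypothetical destination parent

    # dirs_exist_ok=True (Python 3.8+) merges into an existing destination
    # instead of raising an error.
    shutil.copytree(src, dst_parent / src.name, dirs_exist_ok=True)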

  • Moving original files when you use your own folder structure

    I currently maintain my own folder structure (on the HDD) for Aperture to read files from (rather than allowing it to create its own complicated structure). Based on what I have seen so far, I understand that Aperture links its project with the HDD folder from which you import images.
    In one instance, where I moved a file out of the HDD folder structure, the image still appears in Aperture, but with a yellow question mark, and it does not allow me to edit it. Here I am assuming that since I moved the file from its HDD folder, Aperture is no longer able to locate it. Hence, I have the following queries about moving/deleting files from my HDD:
    Q1: If I want to move a file from one HDD folder to another, how do I do so without losing its link in Aperture?
    Q2: How do I completely delete a file from my HDD using Aperture? Assume here that I have imported it into Aperture but haven't made any changes to it, which means that I have only one version, the master version. Currently, when I delete it in Aperture using 'Delete version', I notice that the file still exists in the HDD folder. Is the only way to delete a file to do so first from Aperture and then use Finder to delete it from the HDD folder? If not, what's the best way?
    Q3: There is another scenario for Q2 - if I have made changes to a file after importing it into Aperture, there are now two versions, the master version and the edited version. If I want to delete both of these files, what's the best way to do so?

    OK, let's take a step back here. In your previous post you asked how to move a file from one folder on your HD to another so that Aperture wouldn't lose track of it. For that we answered: use File->Relocate Masters. The last two questions concerned deleting images from both Aperture and the HD. And we covered that.
    Now you write:
    If I want to move my master and version files from one project to another (and the corresponding master files too), I should first move the master files using File > Relocate Master and then drag-drop the versions from within Aperture. (I usually maintain similar names for my HDD folders and corresponding projects in Aperture, hence if I move the master, then it makes sense to move the versions too.)
    Now you're bringing in moving images in Aperture from one project to another. What you are saying is correct, but bear in mind:
    The library structure inside Aperture is totally independent of the file structure you use to store referenced masters on the HD outside of Aperture.
    Now it is possible to have one mimic the other, as you seem to be doing, but for all intents and purposes Aperture doesn't care at all how the external file structure appears as long as Aperture knows where the masters are.
    So if you want to move masters INSIDE of Aperture from one project to another you can simply drag the visible image from the source project to the destination project. All versions will come along, versions are coupled to their masters and as such can't be moved independently. Any versions in Albums will continue to point to the master they came from.
    Now if you want to move the master files EXTERNAL to Aperture you need to use the File->Relocate Master command so that Aperture will know where the files are.
    Note: there is another command, File->Locate Referenced Files, that is used to reconnect Aperture to its referenced masters if the connection between the two is somehow broken - for example, what would happen if you moved the referenced masters using the Finder rather than the Aperture command.
    So you should be good to go; just remember that the two structures, the Aperture library and the referenced files, are independent and don't really need to be kept in sync. By doing this you're making extra work for yourself that isn't really necessary.
    regards

  • Script to copy data to new folder structure

    I am working on a project where I need to migrate many TB's of data into a new folder structure.
    I have exported one of the folders, including all subfolders, with TreeSize and exported it to Excel.
    So the source folder structure is in column A, which has several thousand lines, and I have put the destination folders in column B.
    Now I would like to script the data copy: for every line, the source folder is listed in column A and the destination folder is listed in column B.
    This has to be done for all contained subfolders and files, of course.
    Has anyone got an idea of what the script should look like?
    I can export the Excel sheet to CSV, but unfortunately that is where my knowledge stops.
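    For illustration only, here is a minimal Python sketch of such a CSV-driven copy. It assumes a headerless, two-column CSV (source path in column A, destination path in column B); the file name folders.csv is a placeholder.
    # Minimal sketch: for each CSV row, copy the source folder tree (column A)
    # into the destination folder (column B), including all subfolders and files.
    import csv
    import shutil
    from pathlib import Path

    with open("folders.csv", newline="") as f:
        for source, destination in csv.reader(f):
            src = Path(source.strip())
            dst = Path(destination.strip())
            dst.parent.mkdir(parents=True, exist_ok=True)
            # dirs_exist_ok=True merges into a destination tree that already exists.
            shutil.copytree(src, dst, dirs_exist_ok=True)
            print(f"Copied {src} -> {dst}")
    The same loop could just as easily drive robocopy instead of shutil if you prefer to stay with your existing tooling.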

    Win32 Device Namespaces
    The "\\.\" prefix will access the Win32 device namespace instead of the Win32 file namespace. This is how access to physical disks and volumes is accomplished directly, without going through the file system, if the API supports this type of access. You can access many devices other than disks this way (using the CreateFile and DefineDosDevice functions, for example).
    https://msdn.microsoft.com/en-us/library/windows/desktop/aa365247(v=vs.85).aspx
    ¯\_(ツ)_/¯

  • Migrate data to new folder structure

    I need to migrate a lot of data to a new folder structure, one of the things I will have to deal with is file paths which are too long.
    To copy the data I have made a PowerShell script that uses robocopy to copy all the data.
    Of course I do not want any disruption in this process (or as little disruption as possible). What can I do to prevent issues with long file paths?
    Is there an easy way to modify my script to detect issues with long file paths?
    What would be the way to go to prevent copy errors during the robocopy action and fix possible issues before starting?

    There are utilities out there (free and not free) that you can run to get a report on which folders are too long. What I did was take the results in the report and email them to an admin assistant or some other key person in each department, and they would fix the problem. Once they were all done, I would do the migration.
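    As a rough illustration only, a minimal Python sketch of such a too-long-path report might look like the following; the share path and the 260-character MAX_PATH threshold are assumptions you would adjust for your environment.
    # Minimal sketch: report every file or folder whose full path length meets or
    # exceeds the classic 260-character MAX_PATH limit, so it can be fixed
    # before the robocopy run.
    import os

    SOURCE_ROOT = r"\\fileserver\share\data"  # hypothetical source root
    MAX_PATH = 260

    for dirpath, dirnames, filenames in os.walk(SOURCE_ROOT):
        for name in dirnames + filenames:
            full_path = os.path.join(dirpath, name)
            if len(full_path) >= MAX_PATH:
                print(f"{len(full_path)}\t{full_path}")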

  • Transparent integration of a new drive into the folder structure of a file server?

    Hallo,
    I'm looking for a solution to a problem that seems to be simple, but obviously is not that easy to solve:
    I have to integrate a second external RAID system into our Mac OS X 10.6.8 Server-based file sharing server that is accessed from Mac and Windows clients via AFP and SMB. I want to move one or two main folders from the old RAID to the new one, but this physical change has to be invisible to the users, as I don't want to confuse anyone.
    So my idea was to move the folders to the new drive and create symbolic links at the old locations pointing to the new locations. This works very well for the Windows clients accessing the server via SMB, but does not work for the Macs on AFP.
    So I tried to move one folder to the new drive and mount it at the old location. Again this works fine for SMB but does not work for AFP; the mount point does not show up in a directory listing via AFP.
    Do you have any other ideas for how I might integrate the new drive in a transparent way into the old folder structure?
    Thanks for your input
    Florian
    PS: I could use SMB on the Macs, but for some reason, whenever I try to log into the server from a Mac via SMB, the user name and password are accepted, but then the Mac client displays a message saying that I don't have the right to access the share. The same share works using AFP.

    Define 'invisible' please.
    Do you mean you need to be able to do this live, while users are on the system? or just that you can shut the machine down, reconfigure it, and bring it back up with the new configuration, even though the shares look the same to the users?
    I'm guessing the latter, but it's worth asking.
    Ultimately the problem lies in the way the file sharing systems deal with multi-volume sharepoints, and it's not easy. If you think about it, say you have a 1TB array handling the main sharepoint and you want to substitute one of the directories in that sharepoint with a new, empty 4TB array.
    When the user mounts the sharepoint they get a little status bar at the bottom of the window showing the available space... how much is that? Well, initially it would be however much of the 1TB volume is unused... except if they switch to the linked directory they now have 4TB available (or thereabouts)... so the amount of space available has changed even though, from the user's standpoint, they're still on the same share.
    It isn't valid for the OS to report 4 TB as free because that isn't the case unless you're in this specific directory.
    It also isn't valid for the OS to report 5TB free, even though there is that amount of space altogether.
    It also isn't valid for the OS to report 1TB free because you could upload 4TB of data if you put it in the right place.
    There are few solutions to this. Microsoft sort of addressed this with their DFS solution in Windows Server, but it's not trivial.
    Unfortunately you can't just blow it off and ignore the issue.
    Offhand there's only one thing I can think of that *might* work. If it doesn't, then you're down to using multiple sharepoints on the server, with users mounting both disks simultaneously (which can be automated, for what it's worth).
    The one thing to try is to statically mount the second RAID at the appropriate location so that, as far as the OS is concerned, it looks like just another directory even though it's on a different disk. You'd do this by editing /etc/fstab and adding a line like
    UUID=AABBCCDD-79F7-33FF-BE85-41DFABE2E2BA /path/to/mount    hfs   rw
    to /etc/fstab (this file may not currently exist, so just create it as root).
    The first field is the UUID of the drive (which you can get via diskutil info)
    The second field is the path where you want this drive to appear in the filesystem - i.e. somewhere in the path of your sharepoint. There must be an existing (empty) directory at this path when the disk mounts.
    The third field identifies the disk as HFS
    The fourth field marks the disk as read-write
    Now when the system boots it should locate this disk and mount it on top of the existing RAID volume. If you cd to your sharepoint you should see the existing drive and if you cd from there into your mounted directory you should be looking at your new RAID.
    Now, this all works fine (or, at least, should do) from a standard OS standpoint. The big question is whether the AFP and SMB daemons honor and support this kind of setup... there's one way to try, of course...
    Now for testing purposes you could mount it at a dummy directory, just to see whether it's available to network clients. If it is then your next step would be to clone the data from the directory onto the new drive, then edit the fstab to mount the disk at the appropriate location.
    Note also that the entire volume will replace the directory you mount over - that means you can't replace two (or more) directories with one volume, but you can, of course, partition or setup your RAID into multiple volumes and mount each individual volume over a specific directory.

  • [svn:osmf:] 10584: Adding a missing file from that last folder structure refactor.

    Revision: 10584
    Author:   [email protected]
    Date:     2009-09-24 17:19:48 -0700 (Thu, 24 Sep 2009)
    Log Message:
    Adding a missing file from that last folder structure refactor.
    Added Paths:
        osmf/trunk/plugins/akamai/AkamaiBasicStreamingPlugin/AkamaiBasicStreamingPlugin.as

    Hello,
    Welcome to MSDN forum.
    I am afraid that the issue is outside the support range of the VS General Question forum, which mainly discusses the usage of the Visual Studio IDE, such as the WPF & SL designer, the Visual Studio Guidance Automation Toolkit, Developer Documentation and Help System, and the Visual Studio Editor.
    Because your issue is about ASP.NET web application development, I suggest that you post your issue on the ASP.NET forum, http://forums.asp.net/, for a better solution and support.
    Best regards,

  • [svn:osmf:] 16197: Config file changes to reflect the folder structure changes of libs and plugins

    Revision: 16197
    Author:   [email protected]
    Date:     2010-05-18 15:07:01 -0700 (Tue, 18 May 2010)
    Log Message:
    Config file changes to reflect the folder structure changes of libs and plugins
    Modified Paths:
        osmf/trunk/apps/samples/framework/DynamicStreamingSample/DynamicStreamingSample-build-config.xml
        osmf/trunk/apps/samples/plugins/AkamaiPluginSample/AkamaiPluginSample-build-config.xml
        osmf/trunk/framework/OSMFAIRTest/OSMFAIRTest-build-config.xml
        osmf/trunk/framework/OSMFIntegrationTest/osmfintegrationtest-build-config.xml
        osmf/trunk/framework/OSMFTest/OSMFTest.mxml
        osmf/trunk/framework/OSMFTest/osmftest-build-config.flex
        osmf/trunk/framework/OSMFTest/osmftest-build-config.flexcov
        osmf/trunk/framework/OSMFTest/osmftest-build-config.xml

    Grant,
    It appears that in the past few days you have not received a response to your
    posting. That concerns us, and has triggered this automated reply.
    Has your problem been resolved? If not, you might try one of the following options:
    - Visit http://support.novell.com and search the knowledgebase and/or check all
    the other self support options and support programs available.
    - You could also try posting your message again. Make sure it is posted in the
    correct newsgroup. (http://forums.novell.com)
    Be sure to read the forum FAQ about what to expect in the way of responses:
    http://forums.novell.com/faq.php
    If this is a reply to a duplicate posting, please ignore and accept our apologies
    and rest assured we will issue a stern reprimand to our posting bot.
    Good luck!
    Your Novell Product Support Forums Team
    http://forums.novell.com/

  • How to rescue MTS files with no AVCHD folder structure?

    Occasionally friends ask me for help with  AVCHD cameras but as I have not yet got one I have to rely on what I read on the forums.
    I have experimented with AVCHD clips  from the DVD accompanying Tom's FCE 4 Editing Workshop book and have a couple of questions.
    1. From my tests it appears that the whole folder structure is not necessary. Just having the BDMV folder and all its contents appears to be sufficient for FCE 4 to recognise and convert the files. Is this correct?
    2. People who accidentally destroy the folder structure and just save the MTS files find that FCE cannot recognise them.
    I have found that RevolverHD and VLC can both play the basic MTS but only VLC can transcode them and that is only to a limited number of codecs. However, once transcoded the files could then be put into Streamclip and converted to AIC.
    Rather a complex and potentially degrading process.
    So what other apps would be better to use for converting MTS to AIC or ProRes?

    Let's sort through a few things. Yes, only the BDMV folder is needed, but you need the whole folder, in which case you may as well have all the metadata and take the whole folder structure for archiving.
    1440x1080 does not need to be converted to 1920x1080. 1440x1080 is anamorphic AVCHD and is included in the AVCHD specification. In fact there is a 1440x1080 preset in FCE and that's all you have to use. The media is anamorphic, and it displays in normal 16:9 widescreen as 1920x1080.
    The same clip when simply rewrapped as an H.264 changed from 23.5MB to 31.4MB.
    This appears to be because it was automatically given extra pixels to stretch it out to 1920x1080.
    The file size difference is not the pixel size. If the media is 1440x1080 H.264 MTS rewrapped to QuickTime, the file will be larger because of the QuickTime file structure. It will still be H.264 1440x1080.

  • Use Automator to move files and folder structure to another folder, retaining destination contents

    I have been struggling trying to set up Automator to move files and folder structure to another folder, retaining the destination contents. Basically, I need to add files at the destination, within the same folder structure that exists at the source. Here are some details about the scenario:
    -I have PDF files that I create on a separate computer from my daily-use machine
    -For security reasons, the source computer doesn't have access to any shares on the destination computer
    -The destination computer has access to shares on the source computer
    -I want to delete the original PDFs at the source after they are moved or copied
    I haven't been able to get Automator to move or copy the folder contents (files and subfolders) without dropping everything copied at the top level of the destination, resulting in many duplicate folders and a broken folder structure.
    So far I've only had luck getting this to work at the command line, but I'd really like to have this setup in Automator so that I could have either a service or application that I could use for any folder, prompting for the source and destination folders.  I'm a relatively new Mac user with limited Linux experience, so this is the command that I've cobbled together and currently accomplishes what I'm looking for:
    ditto /Volumes/SMB_Temp/SOURCE ~/Desktop/Documents/DESTINATION
    cd /Volumes/SMB_Temp/SOURCE
    find . -type f -name "*.pdf" -exec rm -f {} \;
    Thanks for any ideas!

    If you have a command-line syntax that works, why not just create an Automator workflow with a single 'Utilities -> Run Shell Script' action, where that action has your (working) shell commands?
    Seems way, way simpler to me than trying to reinvent the wheel and transcribe shell commands into individual Automator actions

  • MTS files (and really, they're just MTS, not the entire folder structure)..

    OK, I see lotsa advice about dealing with MTS files...if they're in their proper folder structure.
    Well, that'd be easy, and I'd be importing them now if they were in their proper folder structure. But they're not...I have 503 .mts files, in a folder, that a client copied "for me", and now expects me to import.
    Any advice? Like I said, I only have the .mts files, not the supporting folder structure.

    Greetings all from Jakarta - I have a feeling this is going to be a lengthy response that will only be of interest to those currently suffering from MTS file issues, but I do believe (while my solutions are not perfect) I have found some answers. What I have not found, unfortunately, are the hairs that I pulled out of my armpits in frustration over these MTS issues.
    Background info: My office purchased the Panasonic HDC-TM700 to provide back-up to our Panasonic 170EN and we have been quite happy with the results. We have also been converting the MTS files to .MOV files with both Toast Titanium and Final Cut Pro.
    SITUATION #1
    After a day of shooting I downloaded the TM700 files (full structure) to the hard disk. On day two, after a full day of shooting, I realized I had forgotten to format the camera's memory card, so I had 50 MTS files that I had already downloaded. Ever the smart guy, I highlighted files #51-#102 and dragged them into the STREAM folder of the previous day's download. Then I formatted the camera so I wouldn't forget the next day.
    Well, during log and transfer in FCP ONLY THE FIRST 50 FILES could be read, because that is what the metadata in the info supported ..... FCP couldn't even locate the other 50 files. I nearly fainted at the thought of getting fired. Thank goodness, when we dragged the MTS files into TOAST TITANIUM they could be read and converted!
    SITUATION #2
    The new guy (seriously, it was the new guy!) ONLY DOWNLOADED THE MTS files, and since I wasn't there I didn't see the havoc when neither FCP nor TOAST could open the files. I was called in and my first idea was to take an old file structure, copy it, and slip the MTS files into the STREAM folder.
    Well, it worked!! Kind of.... The interesting thing is that the files could be converted in TOAST (FCP would not open the files) BUT ONLY UP TO THE TIME CODE that was equivalent to the original file. So if the original file was 1 minute and the file that I only had the MTS file for (and which corresponded to the file's number) was 5 minutes, only 1 minute was converted.
    So what I did was take the TM700 and shoot five minutes of my foot. Then I downloaded the full file structure. Since I only took one shot of five minutes there was only one file in the STREAM folder, which was a 598MB file numbered 00000.MTS.
    I know I will regret it one day, but I deleted the MTS file of my foot. Then I placed one of the "homeless" MTS files into the (now empty) STREAM folder and changed the name to 00000.MTS.
    When opened in FCP log and transfer - IT DID NOT WORK. However, in TOAST I am happy to say IT WORKED, yayyyy. And the file that was converted was the length of the clip. What I thought might happen is that if the clip was only 30 seconds long but the foster file was 5 minutes then there would be 30 seconds of footage and 4.5 minutes of black empty space on video. That wasn't the case.
    THE SHORT TERM SOLUTION:
    Over the years I have been very very very lucky with all the help I have gotten over the internet and in my own small way I hope the following helps out others. So here is what I have done, with instructions.
    PLEASE NOTE I AM INDONESIAN LIVING IN INDONESIA SO WE USE PAL ..... 25 frames per second. .... Hopefully if someone in North America likes this idea and can help out, they will post a NTSC file structure.
    OKAY INSTRUCTIONS:
    1. Double-click this link and download the file structure. The file structure is only 68 KB - all your (and my) frustrations for 68,000 bytes!!
    www.klirkom.com/helpout/AVCHD
    2. If you look inside the folder (AVCHD->BDMV->STREAM) you will see AN EMPTY STREAM FOLDER. This is where you will place your "lost/not working" MTS files. Sorry, but this file structure only supports two files at a time. The first file is 5 minutes long, the second file is 33 minutes long (I was going to make it 10 minutes but I forgot about the camera being on and left it for 33 minutes ...)
    3. Change your first file name (preferably an MTS file less than 600mb so that the 'foster' file can cover it) to 00000.MTS. Now change your second file (preferably less than 4GB, but unless you are filming the whole wedding this should be okay for most people.... the important thing to remember is that if you want your full file converted without missing any parts at the end, the files should be smaller in size than the foster files) to the name 00001.MTS
    4. Now drag your newly named and newly placed MTS files into TOAST TITANIUM. Click convert. I have not tried this with other software but hopefully it works for you.
    5. If you want you can now rename your original MTS files to their original names and continue with the next two files. I know with 500 MTS files this is a HUGE pain in the buttox .....
    Okay, good luck all!!
    Jakartaguy
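    For anyone facing the 500-file shuffle described in steps 2-5 above, here is a small, purely illustrative Python sketch of one way to semi-automate it: it stages the orphaned clips two at a time into the foster STREAM folder under the expected names, then waits while each pair is converted in Toast. Both paths are hypothetical placeholders, and each clip must still respect the size limits mentioned in step 3.
    # Hypothetical helper: stage orphaned .MTS clips two at a time into the
    # foster AVCHD/BDMV/STREAM folder as 00000.MTS / 00001.MTS, pausing so each
    # pair can be converted in Toast before the next pair is staged.
    import shutil
    from pathlib import Path

    orphans = sorted(Path("/path/to/orphaned_clips").glob("*.MTS"))
    stream = Path("/path/to/AVCHD/BDMV/STREAM")

    for i in range(0, len(orphans), 2):
        pair = orphans[i:i + 2]
        for slot, clip in enumerate(pair):
            shutil.copy2(clip, stream / f"{slot:05d}.MTS")
        input(f"Staged {', '.join(c.name for c in pair)} - convert in Toast, then press Enter...")
        for slot in range(len(pair)):
            (stream / f"{slot:05d}.MTS").unlink()  # clear STREAM for the next pair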

  • ADF11g - integration with OIM -folder structure for OIM configuration files

    Hi All,
    I'm trying to make a call to a remote OIM using the OIM API from my ADF backing bean, on click of a button on the JSPX page.
    I'm able to compile the page, but the issue I'm facing is that I'm not able to read the configuration details specified for OIM connectivity from the OIM configuration files (authwl.conf, xl.policy, xlconfig.xml).
    So does anyone know what the folder structure should be for these config files (OIM) when we are integrating with this API from an ADF backing bean?
    Thanks, all.
    Thanks & Regards,
    Dharmathej M

    Hi Daniel, thanks for the response, but I read that doc before asking here, and that's one of the reasons for my question.
    On the first line of the doc it says *"This appendix includes instructions that describe how to configure WebSphere so that Oracle Identity Manager (OIM) and Oracle Identity Analytics (OIA) can be installed on separate servers."*
    It assumes both products are on different servers and also requires the creation of a new profile and node for OIA.
    WAS_NDS_HOME/AppServer/bin/manageprofiles.sh -create
    -templatePath WAS_NDS_HOME/AppServer/profileTemplates/managed
    -profileName oia-managed01 -profilePath WAS_NDS_HOME/profiles/oia-managed01
    -nodeName oia-managed01-node01 -hostname hostname
    Integrate the OIA node to the OIM Cell by typing the following command on the OIA Machine:
    cd OIM_HOME/xellerate/setup; ./xlAddNode.sh oia-managed01 oia-managed01-node01 192.168.21.9 8883 xelsysadm password1
    Our intention is to use the same machines and servers OIM is using, so we don't need extra machines or extra WebSphere objects.
    Any tips on that?
    Regards.
