Filesystem Storage

Hi there,
I'm currently evaluating Kodo 3.2.3 and couldn't find a satisfying solution
to the following problem:
Due to customer requirements, we need to provide a store that uses only
files and directories of the filesystem. Performance does not need to be
taken into account. Each object should be a single file. For searching, there
should be one or more index files. The index files must not contain any data
that is not part of an object. The store should be multi-user aware, using
something like lock files.
Has anyone implemented something like that yet?
Will it be possible to implement such a store with moderate effort based
on the Kodo framework?
Regards,
Markus Stier
Stegmann Systemberatung

First, I'd try to convince the client to use a file-based SQL database like
Hypersonic SQL or JDatastore, both of which Kodo supports.
But Kodo does allow for creating custom back-ends. In fact, we include a demo
of a custom Kodo store that reads and writes XML files. You can use this sample
to build your own store. See src/kodo/xmlstore/ for the implementation code,
and samples/xmlstore for some sample client code using the XML store.
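
Just to sketch what the layout from the requirements above could look like (plain Java for illustration only, not Kodo's store API; the class, directory, and file names below are all made up), one file per object, one index file per class that holds nothing but object ids, and a store-wide lock file might be organized like this:

import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.channels.FileLock;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustration only: one file per object plus a store-wide lock file so that
// concurrent processes serialize their writes. This is not Kodo API.
public class FilePerObjectStore {
    private final Path baseDir;

    public FilePerObjectStore(Path baseDir) throws IOException {
        this.baseDir = baseDir;
        Files.createDirectories(baseDir);
    }

    // Writes one object as a single file, guarded by the lock file.
    public void store(String className, String oid, byte[] data) throws IOException {
        Path dir = baseDir.resolve(className);
        Files.createDirectories(dir);
        File lockFile = baseDir.resolve("store.lock").toFile();
        try (RandomAccessFile raf = new RandomAccessFile(lockFile, "rw");
             FileChannel channel = raf.getChannel();
             FileLock lock = channel.lock()) {   // blocks until other writers release the lock
            Files.write(dir.resolve(oid + ".obj"), data);
            appendToIndex(className, oid);       // the index only repeats data already in the object
        }
    }

    // One index file per class: one object id per line, nothing that is not part of an object.
    private void appendToIndex(String className, String oid) throws IOException {
        Path index = baseDir.resolve(className + ".index");
        Files.write(index, (oid + System.lineSeparator()).getBytes(StandardCharsets.UTF_8),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public byte[] load(String className, String oid) throws IOException {
        return Files.readAllBytes(baseDir.resolve(className).resolve(oid + ".obj"));
    }
}

The Kodo-specific part would then be adapting the xmlstore sample mentioned above so that it delegates its reads and writes to a store along these lines instead of to XML files.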

Similar Messages

  • SAP Content Server 6.40 on Windows 2003, IIS 6, filesystem storage

    Hello:
    We have installed SAP Content Server 6.40 on Windows Server 2003 with IIS 6, using the filesystem as storage, not SAPDB.
    The installation goes without any issues and completes successfully, but the test URL http://<host>:<port>/ContentServer/ContentServer.dll?serverInfo returns "The page cannot be displayed" instead of the expected system info/status. We need to fix this before moving on to the configuration in ERP; any suggestions on Content Server or IIS settings/configuration are appreciated.
    We're using the standard install port 1090 and testing from a browser on the same host.
    Regards,
    Eduardo

    Additional information:
    Looking in the event viewer under application, I see the following 2 errors:
    1) The HTTP Filter DLL D:\Program Files\SAP\Content Server\ContentServer.dll failed to load.  The data is the error.
    Data: 0000: 0000007f
    2) Could not load all ISAPI filters for site/service.  Therefore startup aborted.
    Regards,
    Eduardo

  • Database storage stress test

    Hi,
    I have a ucm 11g instance up and running with SSO.
    I need to test database storage against filesystem storage.
    Does anyone have any suggestions on how to test this?
    Also, I'm sure I read somewhere (but I can't find it now) that once I go in one direction I can't go back; does anyone have any details about this?
    Many thanks
    James

    First of all, take a look at File Store Provider (here: http://download.oracle.com/docs/cd/E14571_01/doc.1111/e10792/c02_settings005.htm#CSMSP445)
    In general, filesystem storage is always used even if the data ends up stored in the database; it is, though, understood as temporary storage. In theory, you can keep your files in both locations, and I see no reason why you should not be able to go from FS to DB and back, BUT you have to consider the consequences (you might have to rebuild indexes or even migrate data from one storage to the other).
    As for stress tests, first you have to decide WHAT you want to test. Potential candidates are:
    - checkin of a single item (wasted effort: since FS is always used as an intermediate storage it will always be a bit faster)
    - mass checkin (e.g. from Batch Loader - especially if you use Fast Checkin settings, db can be a bit faster, but you will need a real lot of small files)
    - search
    - update (metadata - wasted effort: should be the same)
    - backup
    - migration of content
    Then, you will have to set up two environments with more-or-less the same conditions (CPU power, memory, disk speed).
    And finally, you will have to create and run your test cases. I'd suggest automating the stress tests by writing a program that calls the same services with the same data (see the sketch below). Use WebServices (if non-Java) or RIDC (if Java).
    Alternatively, if your task is "to get results" rather than "to perform stress tests", you could approach consulting services or project managers to provide some normalized results for you. Some figures can be found in this whitepaper: http://www.oracle.com/us/products/middleware/content-management/ecm-extreme-performance-wp-077977.pdf
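    If you go the RIDC route, the driver can be as small as the sketch below (the host, port, credentials, service name and query text are placeholders to replace with your own; it assumes the standard oracle.stellent.ridc client library is on the classpath):

    import oracle.stellent.ridc.IdcClient;
    import oracle.stellent.ridc.IdcClientManager;
    import oracle.stellent.ridc.IdcContext;
    import oracle.stellent.ridc.model.DataBinder;
    import oracle.stellent.ridc.protocol.ServiceResponse;

    // Rough stress-test driver: calls the same UCM service in a loop and times it.
    public class UcmStressTest {
        public static void main(String[] args) throws Exception {
            IdcClientManager manager = new IdcClientManager();
            // idc://host:port is the Content Server intradoc socket; values here are placeholders.
            IdcClient client = manager.createClient("idc://ucmhost:4444");
            IdcContext ctx = new IdcContext("weblogic", "password");

            int iterations = 1000;
            long start = System.currentTimeMillis();
            for (int i = 0; i < iterations; i++) {
                DataBinder binder = client.createBinder();
                binder.putLocal("IdcService", "GET_SEARCH_RESULTS");          // same service, same data every time
                binder.putLocal("QueryText", "dDocType <matches> `Document`");
                ServiceResponse response = client.sendRequest(ctx, binder);
                response.getResponseAsBinder();                               // force the response to be read
            }
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(iterations + " calls in " + elapsed + " ms");
        }
    }

    Run the same loop against the FS-backed and the DB-backed environment and compare the timings.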

  • Dataguard using ASM in primary node and FILESYSTEM in standby node

    Hi There!
    I need to configure Dataguard. My primary DB is running on ASM and the standby node is going to use filesystems, so I want to know if there is a guide that I can follow in order to get this task completed successfully.
    My operating system is Solaris 10, and the DB release is 10.2.0.4. I want to know the best practices.
    I also want to revisit some RMAN restore and recover techniques to get this configuration ready, up and running.
    Thanks in Advance.
    Paola
    @>--->----

    Setting this up isn't that different from setting up a single-instance standby on ASM. The MAA guide is here:
    http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10g_RACPrimarySingleInstancePhysicalStandby.pdf
    It details the configuration using ASM on both sides. If you want to use filesystem storage on the standby, the following parameters will need to be changed
    *.db_file_name_convert='+DATA/CHICAGO/','+DATA/BOSTON/','+RECOVERY/CHICAGO','+RECOVERY/BOSTON'
    *.log_file_name_convert='+DATA/CHICAGO/','+DATA/BOSTON/','+RECOVERY/CHICAGO','+RECOVERY/BOSTON'
    to something like
    *.db_file_name_convert='+DATA/CHICAGO/','/oradata/boston','+RECOVERY/CHICAGO','/recovery/boston'
    *.log_file_name_convert='+DATA/CHICAGO/','/oradata/boston','+RECOVERY/CHICAGO','/recovery/boston'
    Alternatively, you can set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n parameters and let Oracle create OMF files for the Data Guard instance.
    Chris

  • [Help]Playing MP3 files with Juk have to wait for 1 hr to hear voice

    I am not sure how long you should wait, but definitely longer than 10 minutes. If you are playing a playlist, you have to wait a long time for the next song...
    Observations:
    1. You can hear sound eventually, which means the hardware and codec are working properly.
    2. You can play the MP3 file with SMPlayer without difficulty, which means the filesystem/storage hardware shouldn't be the problem.
    3. While waiting for a song to play, Juk's CPU usage stays at about 3%. I believe that is a reasonable value and does not indicate any problem with the program.
    Has anybody here had this problem before? Help!

    In case your Phonon backend is GStreamer, change it to Xine; that's what is recommended by the Amarok devs.

  • Error while creating application on Oracle IPM

    While trying to create an application on Oracle Imaging and Process Management (IPM) the following error is encountered-
    Event generated by user 'IPM_SystemServiceUser' at host 'CIS'. Cannot create application. Unable to execute service method 'ipmConfigureApplicationProfiles'. Null pointer is dereferenced. [ Details ]
    An error has occurred. The stack trace below shows more information.
    *!csUserEventMessage,IPM_SystemServiceUser,CIS!$!csIpmCannotCreateApp!csUnableToExecMethod,ipmConfigureApplicationProfiles!syNullPointerException*
    intradoc.common.ServiceException: !csIpmCannotCreateApp!csUnableToExecMethod,ipmConfigureApplicationProfiles
    **ScriptStack IPM_CREATE_APPLICATION*
    *3:ipmConfigureApplicationProfiles,**no captured values***
    at intradoc.server.ServiceRequestImplementor.buildServiceException(ServiceRequestImplementor.java:2115)
    at intradoc.server.Service.buildServiceException(Service.java:2260)
    at intradoc.server.Service.createServiceExceptionEx(Service.java:2254)
    at intradoc.server.Service.createServiceException(Service.java:2249)
    at intradoc.server.Service.doCodeEx(Service.java:584)
    at intradoc.server.Service.doCode(Service.java:505)
    at intradoc.server.ServiceRequestImplementor.doAction(ServiceRequestImplementor.java:1643)
    at intradoc.server.Service.doAction(Service.java:477)
    at intradoc.server.ServiceRequestImplementor.doActions(ServiceRequestImplementor.java:1458)
    at intradoc.server.Service.doActions(Service.java:472)
    at intradoc.server.ServiceRequestImplementor.executeActions(ServiceRequestImplementor.java:1391)
    at intradoc.server.Service.executeActions(Service.java:458)
    at intradoc.server.ServiceRequestImplementor.doRequest(ServiceRequestImplementor.java:737)
    at intradoc.server.Service.doRequest(Service.java:1890)
    at intradoc.server.ServiceManager.processCommand(ServiceManager.java:435)
    at intradoc.server.IdcServerThread.processRequest(IdcServerThread.java:265)
    at intradoc.server.IdcServerThread.run(IdcServerThread.java:160)
    at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528)
    at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
    at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    Caused by: java.lang.NullPointerException
    at oracle.imaging.intradoc.MetaDataUtils.addOptionValue(MetaDataUtils.java:123)
    at oracle.imaging.intradoc.IpmRepositoryProfileUtils.createOrUpdateApplicationProfile(IpmRepositoryProfileUtils.java:765)
    at oracle.imaging.intradoc.IpmApplicationService.ipmConfigureApplicationProfiles(IpmApplicationService.java:478)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at intradoc.common.IdcMethodHolder.invokeMethod(IdcMethodHolder.java:86)
    at intradoc.common.ClassHelperUtils.executeMethodEx(ClassHelperUtils.java:310)
    at intradoc.common.ClassHelperUtils.executeMethod(ClassHelperUtils.java:295)
    at intradoc.server.Service.doCodeEx(Service.java:550)
    ... 15 more
    The component IpmRepository is enabled.
    The following is the configuration used while creating the application on IPM
    Application Security -weblogic
    Document Security - Administrators (group)
    Storage policy-
    Document storage - Database AAArule (custom file storage rule)
    Supporting content storage - Database AAArule (custom file storage rule)
    No workflow configuration defined
    I have tried creating the application with other security groups and with filesystem storage, but the error persists.

    Hi,
    There seem to be issues in the profiles being created in UCM.
    Can you check the Trigger Field that has been configured for the profiles? If the trigger field has been configured as Type, can you change it to one of the other custom-defined Information fields and then create the application?
    To access the Trigger Fields, navigate to the WCC web console: Administration --> Admin Applets --> Config Manager --> Profiles --> Trigger Field.
    Thanks
    Aditya

  • Questions for Content Server

    Dear All;
    I have set up the Content Server on Windows 2003 Standard Edition using filesystem storage (not the MaxDB database), and the size of the documents could be approximately 80 GB (for the next few years). I didn't set up the Cache Server.
    All the attachments in ECC will be stored in this external SAP Content Server.
    Question 1:
    Can anyone advise whether the Content Server (filesystem) will have performance issues while storing/retrieving documents? Is there any way to improve the performance of the Content Server?
    Question 2: 
    Security: what are the necessary security options that I need to configure for the Content Server? At the moment, the Windows server is still configured with full access for the Content Repository folder; otherwise ECC gets access-denied problems when communicating with the Content Server.
    Question 3:
    There are some attachments in ECC; how do I move those documents to the Content Server?
    Please advise.
    Many thanks
    Jordan

    When I say use certificates, there should be a checkbox with "check signature". This ensures that only SAP machines with an active certificate can access the content server.
    On the file system, I would ensure that no users have access to the file system through the backend. If they can modify documents, you will be sitting with major problems from a document integrity and legal perspective.
    On the issue of migration, there is an oss note for the migration of data from one repository to another for archivelink.
    Check note number 1043676
    Symptom
    You have created a new repository, and would like to migrate the ArchiveLink Documents to the new repository.
    Other terms
    Migration of documents, ArchiveLink, repository
    Reason and Prerequisites
    This report helps migration of ArchiveLink documents.
    To use this report effectively, you would need to ensure that you apply note 732436. This note allows the migration of documents from and to the types of repositories that are supported by ArchiveLink. The types of repositories supported by ArchiveLink are HTTP, RFC and R/3 Database.
    Solution
    Before you execute the report, you must set up a repository that can store the documents (transaction OAC0). The repository may be in an external archive (connected via HTTP or RFC) or (as of Basis Release 6.10) in the OLTP database of the SAP system. You can also use SAP Content Server as an external archive for storing documents.
    The report has the following parameters:
      OLD_ARC: ID of the old repository previously used
      NEW_ARC: ID of the new repository
      TEST_RUN: If you enable this, only a test run occurs; documents are not copied.
      DEL_FILE: If you enable this, the files are deleted from the old repository.
    Launch transaction SE38 and create a program with the name ZMIGRATE_ARCHIVELINK_FILES. Once the program is created, copy the code from the correction instruction.

  • New Non-ASM Standby Trying to use ASM during recovery

    Oracle 11.2.0.3 on RH6 x86_64. Cross-posting from oracle-l.
    We have a database on ASM. We want to migrate to filesystem storage on the same host (Oracle ZFS). The recommended path from Oracle is to create a standby and then do a failover when ready. Simple enough, you'd think.
    The standby has all references to ASM diskgroups removed, and the convert parameters are set appropriately. I take a new backup including archivelogs, and also a backup standby controlfile. The "duplicate target database for standby" performs the restore phase perfectly fine. When the media recovery phase starts, I see it tries to mount the diskgroup that the primary uses for ASM. However, it fails to do so (plenty of errors in the alert log), then recovery fails and the instance is left in mount mode. Subsequent attempts to run "recover database" or even "crosscheck archivelog all" run into the same ASM errors.
    The one odd thing I see is the reference to the srvctl resource name for that diskgroup:
    Mon Aug 19 10:41:35 2013
    ERROR: failed to establish dependency between database prod_zfs and diskgroup resource ora.FRA.dg
    However I never registered prod_zfs in srvctl, and it still isn't listed when I run "srvctl config database".
    There are no references to ASM paths in the standby v$logfile, v$datafile, v$tempfile or v$archived_log. I created a trace controlfile, and the ASM paths it uses are for logfiles and datafiles which are successfully restored on disk and renamed in the controlfile.
    Wondering if any of you have seen this.

    Here's an example of what I can reproduce 100% of the time:
    RMAN> backup database plus archivelog;
    Starting implicit crosscheck backup at 08/19/2013 16:28:53
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: SID=487 device type=DISK
    allocated channel: ORA_DISK_2
    channel ORA_DISK_2: SID=498 device type=DISK
    allocated channel: ORA_DISK_3
    channel ORA_DISK_3: SID=509 device type=DISK
    allocated channel: ORA_DISK_4
    channel ORA_DISK_4: SID=564 device type=DISK
    ORA-03113: end-of-file on communication channel
    ORA-01403: no data found
    ORA-01403: no data found
    ORA-03114: not connected to ORACLE
    ... (repeated many times)
    Crosschecked 63 objects
    ORA-03113: end-of-file on communication channel
    ORA-03114: not connected to ORACLE
    ... (repeated many times)
    ORA-03114: not connected to ORACLE
    ORA-03114: not connected to ORACLE
    ORA-03114: not connected to ORACLE
    ORA-03113: end-of-file on communication channel
    ORA-03114: not connected to ORACLE
    ORA-03114: not connected to ORACLE
    ORA-03114: not connected to ORACLE
    ORA-03114: not connected to ORACLE
    ... (repeated many times)
    ORA-03114: not connected to ORACLE
    ORA-03113: end-of-file on communication channel
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of backup plus archivelog command at 08/19/2013 16:28:58
    ORA-03114: not connected to ORACLE

  • Flash, PHP, Javascript... and popup's

    HI all,
    I don't know if this is the right place, but here's a problem that at least all the scripts I've seen don't seem to help with.
    I have a web page with PHP. Inside it I load a Flash movie. Inside that I have several objects that are loaded dynamically, and to each one I give a certain int value. Let's call it "index". When I click one of the objects, a JavaScript function is called to pop up another web page with another Flash movie. When the object is clicked, the index is passed to the JavaScript, which opens a new window through the window.open command.
    Since many people have popup blockers, and this kind of solution is annoying, do you know of any script in any programming language (preferably JavaScript or PHP) that can get all the data from the main Flash movie and open another PHP file, with an effect similar to Lightbox, passing the values to this second window?
    As far as I've experimented, this kind of solution only works for images. Do you know if it's possible, or do you have any better suggestion?
    Thanks

    iFS may be excessive for what you are after. It really only integrates well with Java, not PHP, unfortunately.
    From a licensing perspective, iFS requires either an Oracle Database or an Oracle Application Server license in order to run legally.
    Maybe look towards filesystem storage instead of a MySQL BLOB, and just store a pointer to the file in the database.
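    Just to illustrate that last point with a minimal sketch (shown in Java/JDBC purely for illustration, even though the thread is PHP; the directory, table, columns and connection details are all made up), the idea is to write the file to disk and keep only its path in the database:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    // Illustration of "file on disk, pointer in the database" instead of a BLOB column.
    public class FilePointerExample {
        public static void main(String[] args) throws Exception {
            byte[] upload = "example content".getBytes();      // stands in for an uploaded file
            Path dir = Paths.get("/var/data/uploads");         // hypothetical storage directory
            Files.createDirectories(dir);
            Path stored = dir.resolve("movie_123.swf");
            Files.write(stored, upload);

            // Hypothetical connection string and table; only the file path goes into the DB.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:mysql://localhost/site", "user", "pass");
                 PreparedStatement ps = conn.prepareStatement(
                         "INSERT INTO assets (name, file_path) VALUES (?, ?)")) {
                ps.setString(1, "movie_123");
                ps.setString(2, stored.toString());
                ps.executeUpdate();
            }
        }
    }

    The same pattern in PHP would be a move_uploaded_file() call plus an INSERT of the resulting path.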

  • Adding or Using OVM templates in the EM12c software library

    Hi,
    I hope someone can help me :-) I have EM12c U1 and OVM Manager (3.0.3) running. I want to use the self-service portal to deploy OVM templates, but I cannot figure out how to add my existing OVM templates to the software library, and Google is not helping either :-(
    Does anyone have any idea how to do this?

    There are two options to upload OVM Templates and Assemblies into the EM software library:
    Option 1: Using “Upload File Location”
    Pre-req: The template or assembly file is downloaded and stored on a machine running EM 12 agent.
    Option 2: Using “Referenced File Location”
    Pre-req: The template or assembly file is downloaded and stored in an HTTP accessible location.
    By definition in Enterprise Manager, an assembly is a .ova file and an OVM Template is a .tgz file.
    I am describing the more common option 1 here:
    1.     Navigate to Software Library Administration page:
    2.     Configure the storage in the “Upload File Locations” tab. Note that ONLY the “OMS Shared Filesystem” storage type is supported for Oracle Virtual Assemblies.
    Create a directory on the OMS machine and add it as software library storage.
    See "Getting Started" section in Cloud Admin Guide:
    http://docs.oracle.com/cd/E24628_01/doc.121/e28814/toc.htm
    3.     Create a new folder “Cloud Components” in the software library
    4.     Create a “New Entity” for Oracle Virtual Assembly if you are using .ova files, or create a “New Entity” for Templates if you are using .tgz files.
    5.     Give a name for the assembly or template component
    6.     If you are using .ova files, then you need to do an extra step of adding an attachment. Browse and pick the .ovf file as the attachment. It’ll be named descriptor.ovf.
    Note that you will have to unpackage the .ova assembly file and store the .ovf descriptor separately on the machine.
    7.     Go to next step to upload the .ova or .tgz file.
    8.     Specify the software library location on the OMS machine as destination to upload the .ova file
    9.     Make sure you pick “Agent Machine” as the source for the .ova or .tgz file.
    Due to the 2 MB restriction, Local Machine cannot be used as a source for Oracle virtual assemblies.
    10.     Specify the agent machine where the .ova or .tgz file is located and “Add” the file location
    11.     Click next to go to the “Review” step and then “Save and Upload.”
    Please make sure you pick “Save and Upload” and NOT “Save.”
    12.     Click on the newly created assembly component to verify the details:
    13.     Verify the following:
    -     Describe tab: Shows an attachment descriptor.ovf (this is only if you are using assemblies)
    -     Upload Files tab: Should show the .ova or .tgz file in associated files
    -     Customize tab: Should show the assembly structure

  • IBM Guardium with Exadata !

    hi,
    I have a question:
    Can IBM's Guardium monitor the Exadata database completely?
    Is there anything unsupported by Oracle when Exadata is monitored by IBM's Guardium?
    Thanks.

    File change monitoring on Exadata would work much the same way as on any other Oracle 11gR2 database running RAC and ASM. My question would be: what type of files are you looking to monitor for changes? In the Exadata context:
    - Oracle datafiles, redo logs, control files, and parameter files are stored in ASM
    - Archivelogs, flashback logs, etc are stored in the FRA, which is typically ASM as well
    - Database software files, audit trail, and message logs are stored on local Linux ext3 filesystems
    - Storage server software and logs are stored on the storage servers. Oracle does not permit third-party monitoring agents, such as Guardium, to be installed on storage servers. This is akin to most SAN vendors, who do not allow third-party monitoring tools to run on SAN controllers either.
    Hope this helps!
    Marc

  • [Request] diskWriggler - Hard Disk Benchmarking Tool

    Hi,
    Would someone be interested in creating a package for this?
    "diskWriggler™ is a benchmark tool for testing hard disk based storage throughput. It has been designed to provide a report that is meaningful to systems engineers working in the film and post-production industries."
    This software is released under the GNU General Public License version 2.
    Homepage: http://www.xdt.com.au/Products/diskWriggler/
    Current Version: 1.0.2
    Thanks

    # Contributor: Eric Oden <oden>
    pkgname=diskwriggler
    pkgver=1.0.2
    pkgrel=1
    pkgdesc="diskWriggler™ is a benchmark tool for testing filesystem storage throughput of film or video frames as sequential files or as frames contained in one large file."
    url="http://www.xdt.com.au/Products/diskWriggler/"
    license='GPL'
    depends=()
    source=(http://www.xdt.com.au/Development/Downloads/diskWriggler-1.0.2.tgz)
    md5sums=('78fc59c9960dc7b5a1907a4c1a1628f6')
    build() {
        cd "$startdir/src/diskWriggler-$pkgver/src"
        make || return 1
        install -D -m755 diskWriggler "$startdir/pkg/usr/bin/diskWriggler" || return 1
    }
    I wasn't sure if GCC needed to be added as a dependency, since it's needed by a lot of the core packages. I'll throw it up on the AUR later today if no one finds major issues with the PKGBUILD.

  • Read files and folders from a CD

    Hi there,
    I am not sure this is the right forum for my question. Please redirect me if I am in the wrong place.
    I have an AIR app that will be installed from a CD. The client will be changing certain data (like images and video files) regularly and wants to be able to simply write a CD with the new files in a folder alongside the AIR app installer.
    I have searched high and low for ways to read files from a CD. I could do this if I knew the path, but it is different on each system.
    Is there anything that can tell me what the path to the CD drive is, or a way to package the AIR app so that when it installs it looks for a directory on the CD it is installing from and copies it to the AIR application directory?
    Any help is welcome!
    Thanks in advance,
    Nikki

    Hi - it looks like you can find the CD drive using the getStorageVolumes() method of StorageVolumeInfo:
    http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/filesystem/StorageVolumeInfo.html#getStorageVolumes%28%29
    So you'd iterate through all the storage volumes, and for the ones that are removable (isRemovable property), check for the existence of some file you know will be on your CD (a unique name); you would then know that is your install CD.
    -rich

  • DMS - Storage Filesystem

    Dear all,
    We need to implement DMS to store documents via transaction CV01N.
    At the moment we have done this using the option to store the documents in a database table.
    But we need to store the documents on an external filesystem. Can you provide the right configuration that we need to implement in transaction OAC0?
    Best Regards
    Thanks in advance

    Hi Pedro,
    You can store the documents on a file system, but why don't you install the Content Server with MaxDB?
    That is a really good option, and historically works really well.
    Espen

  • Migrating Oracle from Solaris to Linux: Trouble converting FileSystem metadata with fscdsconv in Linux (Veritas Storage Foundation)

    Hello everyone,
    I'm in the process of migrating an Oracle database from a Solaris 5.10 server to Linux Red Hat 6.4. The storage used is SAN, with volumes managed by Veritas Storage Foundation on Solaris.
    At this point, I'm trying to convert the byte order of some volumes that come from Solaris in my Linux server using the following command:
    /opt/VRTS/bin/fscdsconv -y -e -f /tmp/vxConv/dbtemp01.tmp -t os_name=Linux,arch=x86 /dev/vx/rdsk/dgtemp/dbtemp01
    Note: Because of the text formatting, the above command could span several lines, but I'm executing it as just one.
    And I get the following errors:
    UX:vxfs fscdsconv: ERROR: V-3-20012: not a valid vxfs file system
    UX:vxfs fscdsconv: ERROR: V-3-24426: fscdsconv: Failed to migrate
    Searching for a solution to these errors in the Veritas forums, I've found this post, where the user mikebounds gives some steps to migrate from Solaris to Linux. I've replicated these steps, but I get stuck on fscdsconv because of the aforementioned errors.
    Does anyone know what could be happening here, or have any suggestion to share?
    Software Versions involved:
    RHEL 6.4 x86_64
    Veritas Storage Foundation Enterprise 6.2.0.100 on Linux
    Solaris 5.10
    Veritas 5.0 on Solaris
    Disk layout v7
    vxfs filesystem format
    Oracle 10g
    Thank you very much in advance for any help/ideas to solve this
    Best regards
    Raul

    Have you seen this - How to migrate a data store from Solaris to Linux? (Doc ID 1302794.1)
    HTH,
    Pradeep
