Is OCS a real file system or a virtual file system?

Hello gurus,
I would like to ask some fundamental questions regarding OCS.
1. Are the files and folders in OCS stored as real files and directories somewhere in the OS?
2. Or are the files and folders all maintained in the database itself?
We are trying to accomplish the following document management task, and I don't know whether we can achieve it through Oracle Content Services.
We have a bunch of documents attached to the serialized units that we are selling, for example:
1. SN1_A.doc (sold on sales order 12345; the same document is referenced in service request A123)
2. SN1_B.doc (sold on sales order 12345; the same document is referenced in service request A123)
3. SN2_A.doc (sold on sales order 12346 and referenced in SR A124)
4. SN2_B.doc (sold on sales order 12346 and referenced in SR A124)
Now we want two VIRTUAL directory views, one for the sales team and one for the service team.
Virtual view for the service team:
/Field Service/Service Requests/A123/ (will show SN1_A.doc and SN1_B.doc)
/Field Service/Service Requests/A124/ (will show SN2_A.doc and SN2_B.doc)
Virtual view for the sales team:
/Field Sales/Sales Orders/12345/ (will show SN1_A.doc and SN1_B.doc)
/Field Sales/Sales Orders/12346/ (will show SN2_A.doc and SN2_B.doc)
Is this possible in OCS? Is it possible in any document management product?
Please help.

Hello,
out of the box, Oracle Content Services won't do this for you -- at least not exactly.
What you're looking for is a way to tag documents with metadata.
For example, you would define a category called Sales and give it two attributes: sales order and service request.
You create a folder that forces the user to populate these fields when uploading or checking in a document.
A service technician could then use the content search facility (in the content browser) to look for documents with a service request of A123, while a salesman could search for documents tagged with the sales order 12345.
This does not give you the virtual folder look that you seem to be after, so that would be the part you'd have to custom code. But you'd be using the search mechanism I described above.
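As a sketch of what that custom code might do (plain Java, nothing OCS-specific; the Doc record and attribute names are purely illustrative, since in OCS the attributes would live on a category), grouping documents by either attribute yields exactly the two virtual views asked for:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.function.Function;
import java.util.stream.Collectors;

public class VirtualViews {
    // Illustrative only: in OCS these attributes would be category metadata.
    record Doc(String name, String salesOrder, String serviceRequest) {}

    static final List<Doc> DOCS = List.of(
            new Doc("SN1_A.doc", "12345", "A123"),
            new Doc("SN1_B.doc", "12345", "A123"),
            new Doc("SN2_A.doc", "12346", "A124"),
            new Doc("SN2_B.doc", "12346", "A124"));

    // A "virtual folder view" is just the documents grouped by one attribute.
    static Map<String, List<String>> viewBy(Function<Doc, String> attribute) {
        return DOCS.stream().collect(Collectors.groupingBy(
                attribute,
                TreeMap::new,
                Collectors.mapping(Doc::name, Collectors.toList())));
    }

    public static void main(String[] args) {
        // Service team sees one folder per service request:
        System.out.println(viewBy(Doc::serviceRequest));
        // Sales team sees one folder per sales order:
        System.out.println(viewBy(Doc::salesOrder));
    }
}
```

The point is that the same documents appear under both views without being stored twice; each "folder" is just the result of a metadata search.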
Regards,
Ingo

Similar Messages

  • Solaris Virtual File System

Has the Solaris virtual file system changed since Solaris 2.5.1? How much?

    AFAIK, the VFS is not an official (and documented) interface
    and may change from solaris release to solaris release
    (perhaps even with a new kernel patch).
Otherwise, you can probably get the Solaris 8 Foundation Source,
    and use it as the definitive reference documentation ;-)

  • Re: Virtual File System and NCP Shares

    I can confirm that VFS does not work on NCP volumes. In fact an admin volume
    is not created without NSS.
    I'm using the X-plat APIs to manage trustees on NCP volumes.
    John
    "HBware (hans)" <[email protected]> wrote in message
    news:8W6dl.1960$[email protected]..
> Hello, does anyone know how to manage the file rights on non-NSS NCP shares?
    >
    > I can't do this with Virtual File System, the shares aren't available as
    > volumes there.
> Do you have to use the "old" NetWare NWAddTrusteeExt for this (it works), or
> is there another library?
    >
    > Thx in advance
    > Hans
    >
    >

With FUSE you can do that, I think.
But I take it your thing is not POSIX compliant? Aren't there enough filesystems already that you could use instead? Or do you mostly want to learn? In that case it's cool.

  • Commons Virtual File System

    Commons VFS provides a single API for accessing various different file systems. It presents a uniform view of the files from various different sources, such as the files on local disk, on an HTTP server, or inside a Zip archive.
Here is the link:
http://commons.apache.org/vfs/index.html
Now I want to use this API. Can anyone give me links or suggest some books to explore it? I need some basic tutorials for understanding this API.
Thanks in advance.
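Commons VFS itself is a third-party library, but the uniform-view idea it implements can be demonstrated with the JDK alone: NIO.2's FileSystem SPI lets the same Path/Files calls that work on local disk also work inside a Zip archive. A rough stdlib-only sketch (Java 13+; the class and entry names are illustrative, not Commons VFS API):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.FileSystem;
import java.nio.file.FileSystems;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

public class UniformViewDemo {
    static String readFromZip() {
        try {
            Path zip = Files.createTempFile("vfs-demo", ".zip");
            Files.delete(zip); // let the zip provider create a fresh archive
            // Write an entry using ordinary Files calls, via the zip filesystem.
            try (FileSystem zipFs = FileSystems.newFileSystem(
                    zip, Map.of("create", "true"))) {
                Files.writeString(zipFs.getPath("/hello.txt"),
                        "hello from inside a zip");
            }
            // Reopen the archive and read the entry back the same way.
            String content;
            try (FileSystem zipFs = FileSystems.newFileSystem(zip, Map.of())) {
                content = Files.readString(zipFs.getPath("/hello.txt"));
            }
            Files.delete(zip);
            return content;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(readFromZip());
    }
}
```

Commons VFS generalizes this same idea to many more sources (HTTP, FTP, SFTP, and so on) behind one API.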

    Roopesh_Saigaonkar
    Don't multipost the same question. I've removed the thread you started in the Java Desktop Applications forum one minute after this one.
    db

  • Solaris File System/ Virtual File system Documentation

Can anybody help me find Solaris virtual file system documentation/books?
Thanks in advance,
-mayur

    AFAIK, the VFS is not an official (and documented) interface
    and may change from solaris release to solaris release
    (perhaps even with a new kernel patch).
Otherwise, you can probably get the Solaris 8 Foundation Source,
    and use it as the definitive reference documentation ;-)

  • Virtual File System : vfs_freevfs function

Does anybody know anything about the Solaris virtual file system function "vfs_freevfs" from the vfsops struct in vfs.h?
Thanks, Veselka

vfs_freevfs is a callback that gets called when a file system is unmounted, to release (free up) the resources held by the file system.
    Saurabh Mishra

  • Patch Fails: 119907-13 Gnome 2.6.0_x86: Virtual File System Framework patch

    When I try to use my Sun(TM)Update Manager, I get:
    119907-13 Gnome 2.6.0_x86: Virtual File System Framework patch Failed.
    Utility used to install the update failed with exit code 997. Patch is not available in the patch directory.
    It is the only patch left to be installed and I tried rebooting several times with no luck. There is no 119907-13 directory or 119907-13.jar in my /var/sadm/spool either. Thanks for any hints.
--Bob

    Thanks.
I have the free version of Solaris for the x86/x64 platform on my workstation at home. I registered the OS when I installed it, but I do not have a support contract.
I am curious as to why this has never been an issue before with the hundreds of patches I've installed previously.
If I do not wish to purchase a support contract, there ought to be a way to get the patch out of my Sun(TM) Update Manager so that the green indicator in my systray doesn't drive me nuts.
    Thanks for your help and any further suggestions.
--Bob

Windows Server 2012 - Hyper-V - Cluster Shared Storage - VHDX unexpectedly gets copied to System Volume Information by "System", Virtual Machines stop responding

    We have a problem with one of our deployments of Windows Server 2012 Hyper-V with a 2 node cluster connected to a iSCSI SAN.
    Our setup:
    Hosts - Both run Windows Server 2012 Standard and are clustered.
HP ProLiant G7, 24 GB RAM. This is the primary host and normally all VMs run on this host.
HP ProLiant G5, 20 GB RAM. This is the secondary host, intended to be used in case of failure of the primary host.
    We have no antivirus on the hosts and the scheduled ShadowCopy (previous version of files) is switched off.
    iSCSI SAN:
QNAP NAS TS-869 Pro, 8 INTEL SSDSA2CW160G3 160 GB in a RAID 5 with a hot spare. 2 teamed NICs.
    Switch:
    DLINK DGS-1210-16 - Both the network cards of the Hosts that are dedicated to the Storage and the Storage itself are connected to the same switch and nothing else is connected to this switch.
    Virtual Machines:
    3 Windows Server 2012 Standard - 1 DC, 1 FileServer, 1 Application Server.
    1 Windows Server 2008 Standard Exchange Server.
    All VMs are using dynamic disks (as recommended by Microsoft).
    Updates
We applied the most recent updates to the hosts, VMs and iSCSI SAN about 3 weeks ago with no change in our problem, and we continually update the setup.
    Normal operation:
Normally this setup works just fine and we see no real difference in startup speed, file copy and processing speed in LoB applications compared to a single host with two 10000 RPM disks. Normal network speed is 10-200 Mbit/s, but occasionally we see speeds up to 400 Mbit/s of combined read/write, for instance during file repair.
    Our Problem:
Our problem is that for some reason a random VHDX gets copied by "System" to the System Volume Information folder of the Clustered Shared Storage (i.e. C:\ClusterStorage\Volume1\System Volume Information).
All VMs stop responding or respond very slowly during this copy process; you cannot, for instance, send CTRL-ALT-DEL to a VM in the Hyper-V console, or start Task Manager when already logged in.
This happens at random, not every day, and different VHDX files from different VMs get copied each time. Sometimes it happens during the daytime, which causes a lot of problems, especially when a 200 GB file gets copied (which takes a long time).
    What it is not:
We thought this was connected to the backup, but the backup had finished 3 hours before the last time this happened, and the backup never uses any of the files in System Volume Information, so it is not the backup.
    An observation:
When this happened today I switched on ShadowCopy (previous versions of files), set it to use only 320 MB of storage, and the copy process stopped and the virtual machines started responding again. This could be unrelated, since there is no way to see how much of the VHDX is left to be copied, so it might have finished at the same time as I enabled ShadowCopy.
    Our question:
Why is a VHDX copied to System Volume Information when scheduled ShadowCopy (previous versions of files) is switched off? As far as I know, nothing should be copied to this folder when this function is switched off.
    List of VSS Writers:
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Writer name: 'Task Scheduler Writer'
       Writer Id: {d61d61c8-d73a-4eee-8cdd-f6f9786b7124}
       Writer Instance Id: {1bddd48e-5052-49db-9b07-b96f96727e6b}
       State: [1] Stable
       Last error: No error
    Writer name: 'VSS Metadata Store Writer'
       Writer Id: {75dfb225-e2e4-4d39-9ac9-ffaff65ddf06}
       Writer Instance Id: {088e7a7d-09a8-4cc6-a609-ad90e75ddc93}
       State: [1] Stable
       Last error: No error
    Writer name: 'Performance Counters Writer'
       Writer Id: {0bada1de-01a9-4625-8278-69e735f39dd2}
       Writer Instance Id: {f0086dda-9efc-47c5-8eb6-a944c3d09381}
       State: [1] Stable
       Last error: No error
    Writer name: 'System Writer'
       Writer Id: {e8132975-6f93-4464-a53e-1050253ae220}
       Writer Instance Id: {7848396d-00b1-47cd-8ba9-769b7ce402d2}
       State: [1] Stable
       Last error: No error
    Writer name: 'Microsoft Hyper-V VSS Writer'
       Writer Id: {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
       Writer Instance Id: {8b6c534a-18dd-4fff-b14e-1d4aebd1db74}
       State: [5] Waiting for completion
       Last error: No error
    Writer name: 'Cluster Shared Volume VSS Writer'
       Writer Id: {1072ae1c-e5a7-4ea1-9e4a-6f7964656570}
       Writer Instance Id: {d46c6a69-8b4a-4307-afcf-ca3611c7f680}
       State: [1] Stable
       Last error: No error
    Writer name: 'ASR Writer'
       Writer Id: {be000cbe-11fe-4426-9c58-531aa6355fc4}
       Writer Instance Id: {fc530484-71db-48c3-af5f-ef398070373e}
       State: [1] Stable
       Last error: No error
    Writer name: 'WMI Writer'
       Writer Id: {a6ad56c2-b509-4e6c-bb19-49d8f43532f0}
       Writer Instance Id: {3792e26e-c0d0-4901-b799-2e8d9ffe2085}
       State: [1] Stable
       Last error: No error
    Writer name: 'Registry Writer'
       Writer Id: {afbab4a2-367d-4d15-a586-71dbb18f8485}
       Writer Instance Id: {6ea65f92-e3fd-4a23-9e5f-b23de43bc756}
       State: [1] Stable
       Last error: No error
    Writer name: 'BITS Writer'
       Writer Id: {4969d978-be47-48b0-b100-f328f07ac1e0}
       Writer Instance Id: {71dc7876-2089-472c-8fed-4b8862037528}
       State: [1] Stable
       Last error: No error
    Writer name: 'Shadow Copy Optimization Writer'
       Writer Id: {4dc3bdd4-ab48-4d07-adb0-3bee2926fd7f}
       Writer Instance Id: {cb0c7fd8-1f5c-41bb-b2cc-82fabbdc466e}
       State: [1] Stable
       Last error: No error
    Writer name: 'Cluster Database'
       Writer Id: {41e12264-35d8-479b-8e5c-9b23d1dad37e}
       Writer Instance Id: {23320f7e-f165-409d-8456-5d7d8fbaefed}
       State: [1] Stable
       Last error: No error
    Writer name: 'COM+ REGDB Writer'
       Writer Id: {542da469-d3e1-473c-9f4f-7847f01fc64f}
       Writer Instance Id: {f23d0208-e569-48b0-ad30-1addb1a044af}
       State: [1] Stable
       Last error: No error
    Please note:
Please only answer our question and do not offer general optimization tips that do not directly address the issue! We want the problem to go away, not to finish a bit faster!

    Hallo Lawrence!
Thank you for your reply; some comments to help you and others who read this thread:
First of all, we use Windows Server 2012 and the VHDX format, as I wrote in the headline and in the text of my post. We have not had this problem in similar setups with Windows Server 2008 R2, so the problem seems to have been introduced in Windows Server 2012.
    These posts that you refer to seem to be outdated and/or do not apply to our configuration:
    The post about Dynamic Disks:
    http://technet.microsoft.com/en-us/library/ee941151(v=WS.10).aspx is only a recommendation for Windows Server 2008 R2 and the VHD format. Dynamic VHDX is indeed recommended by Microsoft when using Windows Server 2012 (please look in the optimization guide
    for Windows Server 2012).
In fact, if we used fixed VHDX we would have a bigger problem, since fixed VHDX files are generally larger than dynamic disks, i.e. more data would be copied, which would take longer, and the VMs would be unresponsive for a longer time.
    The post "What's the deal with the System Volume Information folder"
http://blogs.msdn.com/b/oldnewthing/archive/2003/11/20/55764.aspx is for Windows XP / Windows Server 2003, and some things have changed since then. For instance, in Windows Server 2012, Shadow Copies cannot be controlled by going to Control Panel -> System.
Instead you right-click a drive (i.e. a volume, for instance the C: volume) in Computer and then click "Configure Shadow Copies".
    Windows Server 2008 R2 Backup problem
http://social.technet.microsoft.com/Forums/en/windowsbackup/thread/0fc53adb-477d-425b-8c99-ad006e132336 - This post is about antivirus software trying to scan files used during backup that exist in the System Volume Information folder, and we do not have any antivirus software installed on our hosts, as I stated in my post.
    Comment that might help us:
So according to the "System Volume Information" definition, the operation you mentioned is a Volume Shadow Copy. Check Event Viewer for Volume Shadow Copy related event logs and post them.
    Why?
Further investigation suggests that a volume shadow copy is somehow created even though the schedule for Shadow Copies is turned off for all drives. This happens at random and we have not found any pattern. Yesterday this operation took almost all available disk space (over 200 GB), but all the disk space was released when I turned on scheduled Shadow Copies for the CSV.
    I therefore draw these conclusions:
The CSV volume has about 600 GB of disk space, and since the Volume Shadow Copy used 200 GB, or about 33% of the disk space, while the default limit is 10%, I conclude that for some reason the unscheduled Volume Shadow Copy did not have any limit (or ignored the limit).
When I turned on the schedule I also changed the limit to the minimum amount, which is 320 MB, and this is probably what released the disk space. That is, the unscheduled Volume Shadow Copy operation was aborted, adhered to the limit, and deleted the Volume Shadow Copy it had taken.
    I have also set the limit for Volume Shadow Copies for all other volumes to 320 MB by using the "Configure Shadow Copies" Window that you open by right clicking on a drive (volume) in Computer and then selecting "Configure Shadow Copies...".
It is important to note that setting a limit for Shadow Copy storage and disabling the schedule are two different things! It is possible to have unlimited storage for Shadow Copies while the schedule is disabled; however, I do not know if this was the case before I enabled Shadow Copies on the CSV, since I did not look for this.
I have now defined a 320 MB limit for Shadow Copy storage on all drives, so no VHDX should be copied to System Volume Information, since they are all larger than 320 MB.
    Does this sound about right or am I drawing the wrong conclusions?
    Limits for Shadow Copies:
    Below we list the limits for our two hosts:
    "Primary Host":
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\)\\?\Volume{e3ad7feb-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (91%)
    Shadow Copy Storage association
       For volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Shadow Copy Storage volume: (E:)\\?\Volume{dc0a177b-ab03-44c2-8ff6-499b29c3d5cc}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    Shadow Copy Storage association
       For volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Shadow Copy Storage volume: (G:)\\?\Volume{f58dc334-17be-11e2-93ee-9c8e991b7c20}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (3%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{e3ad7fec-178b-11e2-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 320 MB (0%)
    C:\>cd \ClusterStorage\Volume1
    Secondary host:
    C:\>vssadmin list shadowstorage
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2012 Microsoft Corp.
    Shadow Copy Storage association
       For volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\)\\?\Volume{b2951138-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 35,0 MB (10%)
    Shadow Copy Storage association
       For volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Shadow Copy Storage volume: (D:)\\?\Volume{5228437e-9a01-4690-bc40-1df85a0e6736}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 27,3 GB (10%)
    Shadow Copy Storage association
       For volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Shadow Copy Storage volume: (C:)\\?\Volume{b2951139-f01e-11e1-93e8-806e6f6e6963}\
       Used Shadow Copy Storage space: 0 bytes (0%)
       Allocated Shadow Copy Storage space: 0 bytes (0%)
       Maximum Shadow Copy Storage space: 6,80 GB (10%)
    C:\>
    There is something strange about the limits on the Secondary host!
I have not changed the settings on the secondary host in any way, and as you can see, the secondary host has a maximum limit of only 35 MB of storage on the CSV, yet it also shows that this is 10% of the volume. This is clearly not the case, since 10% of 600 GB = 60 GB!
The question is, why does it by default set too small a limit (i.e. < 320 MB) on the CSV, and is this the cause of the problem? I.e., is the limit ignored because it is smaller than the smallest amount you can set using the GUI?
    Is the default 35 MB maximum Shadow Copy limit a bug, or is there any logical reason for setting a limit that according to the GUI is too small?

  • How to get the real path of the xml file

    I have a java application
    following is the package structure
    com>>gts>>xml
    having file---------> MyXML.xml
    com>>gts>>java
    having java program to read the file
The problem is that if I use File file = new File("..\\xml\\MyXML.xml"); I get:
    java.io.FileNotFoundException: E:\LEARNING_WORK_SPACE\JavaXml\..\xml\MyXml.xml (The system cannot find the path specified)
         at java.io.FileInputStream.open(Native Method)
         at java.io.FileInputStream.<init>(Unknown Source)
         at java.io.FileInputStream.<init>(Unknown Source)
         at sun.net.www.protocol.file.FileURLConnection.connect(Unknown Source)
How do I get the real path of the XML file?
    Edited by: shashiwagh on Jan 29, 2010 11:46 AM

    Hi,
if your XML file is inside a package you can easily get it from the classloader.
Note that your application may be packaged inside a jar, so it is not safe to use java.io.File for this purpose.
You have an XML file in:
com/gts/xml/MyXML.xml
In a class of the same module (which will be packaged in the same jar), for example com.gts.java.XmlLoader:
// To get the stream:
InputStream is = this.getClass().getResourceAsStream("/com/gts/xml/MyXML.xml");
// Or the URL:
URL xml = this.getClass().getResource("/com/gts/xml/MyXML.xml");
Hope it helps.
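A runnable variant of the classloader approach (stdlib only; the temp directory stands in for the jar, and the package path mirrors the one in the question -- the class name is illustrative):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.net.URL;
import java.net.URLClassLoader;
import java.nio.file.Files;
import java.nio.file.Path;

public class ResourceLoadDemo {
    static String loadResource() {
        try {
            // Materialize com/gts/xml/MyXML.xml under a temp "classpath root".
            Path root = Files.createTempDirectory("cp-root");
            Path pkg = Files.createDirectories(root.resolve("com/gts/xml"));
            Files.writeString(pkg.resolve("MyXML.xml"), "<doc>hello</doc>");
            // Put that root on a classloader and read the file back by
            // resource name, not by filesystem path.
            try (URLClassLoader cl = new URLClassLoader(
                    new URL[] { root.toUri().toURL() })) {
                // ClassLoader resource names use '/' and no leading slash.
                try (InputStream is =
                        cl.getResourceAsStream("com/gts/xml/MyXML.xml")) {
                    return new String(is.readAllBytes());
                }
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(loadResource());
    }
}
```

The same getResourceAsStream call keeps working whether the resource sits in a directory on the classpath or inside a jar, which is exactly why it is preferred over java.io.File here.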

  • When you create a PostScript file you must rely on system fonts and use document fonts

    I got this error:
    "When you create a PostScript file you must rely on system fonts and use document fonts. Please go to the printer properties, "Adobe PDF Settings" page and turn OFF the option "Rely on system fonts only; do not use document fonts."
This strikes me as bad practice. I only want to use system fonts; document fonts will muck me up.
    That being said, I could not easily find a way to fix this with Acrobat Pro XI. I opened Control Panel (Windows 8) and found the printer and mucked around in the various ways to get to change the PDF settings (there are several ways, and some of them present the choices as bold and not just regular). I found two separate switches for "Rely on system fonts only; do not use document fonts" and turned both off, BUT that did not work.
    Frustrated, I went back to FrameMaker and clicked File > Print Setup and then for the PDF printer, Printer Properties. There I found a THIRD instance of "Rely on system fonts only; do not use document fonts" that was set differently than the other two. This was the setting the error message meant. Change this one.
    Thanks for the convoluted UI, Adobe. What a pain, but there is the answer for all who follow after....

    > Windows 7 and Frame 7.2.
    This is not a supported configuration and probably does not work, due (I'm guessing) to driver API changes from XP to Win7, which would affect generating PDF and Ps output. This would affect Save-as-PDF and Print-to-Ps+Distill (unless you have a newer version of the full Acrobat product).
    As it happens, I'm going to attempt something similar: FM 7.0 on Win7 64 Pro. However, I'm going to install that old FM in "XP Mode".
XP Mode is a 32-bit virtual machine running actual Windows XP inside Win7. It is (now maybe "was") available for Windows 7 Enterprise, Professional and Ultimate (but not Basic, Home or not-so-Premium). When XP went off support life earlier this month, Mr. Bill may have taken down the ability to download XP Mode (or not, since some large enterprises are able to purchase continued support for XP at some great cost). XP Mode was never supported on Win8. There are other VMs for Windows available.
If XP Mode is still available, you also need a CPU that has hardware virtualization, which all recent 64-bit AMD processors do, but which is fused off in many low-end 64-bit Intel processors. AMD processors need a separate, unobvious hotfix patch installed before you do anything else with XP Mode.

  • File system and LR2 file system view summary

    Hello,
Is there a way, when navigating a directory via the Folders panel, to click on a folder and get summary information about its contents, both from a file system perspective and from a Lightroom perspective?
File system perspective: the number of physical files supported by LR2 in the folder.
LR2 perspective: the number of files that have virtual copies, how many virtual copies in total, how many photos are in stacks, how many stacks, etc.
    Thanks,
    Matt
    -Windows XP x64 SP2
    -AMD Athlon 64 X2 Dual Core 6400+
    -4.00 GB of RAM
    -Lightroom Version 2.0 64-bit 481478 Camera Raw 4.5
    -Two Silicon Graphics GDM-5011P Monitors
    -Huey Pro with version 1.5 of their calibration/profiling software
    -Nvidia Evga 8800 GT graphics card with 512 MB of Video RAM
    -Monitor One: The 'normal' Lightroom window in either the Library or Develop Module
    -Monitor Two: 'Loupe-Normal' in 1:1 view

No, you can't specify an entire filesystem as something to cluster. I suggest using NFS or some shared filesystem and then running a cluster off of that.
- Mike
Michael Lenart wrote:
> yes
> "Mike Benham" <[email protected]> wrote in message
> news:[email protected]..
> > I'm not sure if I understand what you're asking. How can you cluster a
> > filesystem? It seems like you want to mount some filesystem via NFS and
> > then access that from each instance of a server? Are you asking if
> > Weblogic can perform the duty of NFS?
> > - Mike
> > Michael Lenart wrote:
> > > Can the file system be clustered?? For example, in the cluster level
> > > property file can I have a file system with the name of XDrive and then
> > > use that file system in my components??

  • File IO takeover - Virtual filesystem

How can you create an empty file such that when any program reads from it, the content comes from somewhere else, and when any program writes to it, the bytes go somewhere else than to the disk?
I have been looking at java.nio and FileChannel, but I don't know if this would even be possible.
Just to get started I would like a simple program that "grabs" onto an empty file, sending anything written to the file (by other programs) to the system output stream instead of to the empty file.
My idea would be something like this:
FileChannel cha = new RandomAccessFile("file", "rw").getChannel();
// map/redirect cha to MyFileChannel
class MyFileChannel extends FileChannel {
    // override methods in FileChannel
    public int write(ByteBuffer src) {
        System.out.println(src);
    }
}
Would this be possible? Have I misunderstood FileChannel?

> It is important to me that the solution is platform
> independent: works on Linux (standard Ubuntu/Fedora/Novell desktop installation),
> Windows, Mac, Solaris.
> This remote directory should be accessible through the local file system, so that you
> can do all the stuff that you can do on real local files.
Stating it again - impossible. Doesn't even matter what language you use.
You are describing something that is fundamentally basic to an OS. There are probably OSes that don't have a file system, but you are unlikely to ever see one. Educational courses in operating systems always cover the file system.
A file system depends on the hardware of the system. A device that uses flash ROM is completely different from a system that uses a hard drive. And even systems that use hard drives use different hardware and different assumptions about the data on that hardware.
If you look at the specification for C/C++ you will find that although IO operations are supported, it is at the stream and random access level. There are no disk management interfaces in the specifications.
Each OS has a different API to access the file system, and the specifics vary by OS.
Perhaps if you altered your requirements to the file level (not the byte-read level) then you could manage files, even with Java: you simply delete the old file and rename a file with new data to have the name of the old file. Then the next request gets the new data.
> Can I somehow create a fake usb-disk and control the IO?
Is it possible to do what you are asking (with no restrictions)? Yes. You would have to write a lot of software for each specific targeted system (and version specific as well). You would need to use C/C++ and learn a LOT about the internals of each OS, and a lot about C/C++ as well.
The solution will be very platform dependent. I would suspect that most of the code will be specific to each target.
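The delete-and-rename workaround mentioned above can be sketched in plain Java (stdlib only; the FileSwap class and its method names are illustrative). The new content goes to a sibling temp file, which is then moved over the old name so the next reader sees the new data:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class FileSwap {
    // Write new content next to the target, then move it over the old name.
    // ATOMIC_MOVE is best-effort (not every filesystem supports it), hence
    // the fallback to a plain replacing move.
    static void replaceContents(Path target, String newData) {
        try {
            Path tmp = Files.createTempFile(
                    target.toAbsolutePath().getParent(), "swap", ".tmp");
            Files.writeString(tmp, newData);
            try {
                Files.move(tmp, target,
                        StandardCopyOption.REPLACE_EXISTING,
                        StandardCopyOption.ATOMIC_MOVE);
            } catch (AtomicMoveNotSupportedException e) {
                Files.move(tmp, target, StandardCopyOption.REPLACE_EXISTING);
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Self-contained demo: a file holding "old data" is swapped to "new data".
    static String demo() {
        try {
            Path f = Files.createTempFile("swap-demo", ".txt");
            Files.writeString(f, "old data");
            replaceContents(f, "new data");
            String result = Files.readString(f);
            Files.delete(f);
            return result;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```

This only swaps whole files, of course; it does not intercept individual reads and writes, which is exactly the limitation the reply describes.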

  • How to make bank management system using java file system

Hi, I have some fields:
1. ID
2. Deposit
3. Withdraw
4. Balance
Now how can I manage this bank management system using the Java file system?
Thanks in advance.

    Then we're back to (1): Do your own homework. Google has zillions of links on handling files in Java. When you have written some code and have an actual problem, we'll be happy to help you with it.
    (edit) Incidentally, this sounds suspiciously like the sort of problem they set for the certification programs. In which case, don't bother; they're not worth the virtual paper they're printed on.
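    Without doing the homework for anyone, the general shape of a file-backed approach is worth a sketch: append each transaction as a line to a ledger file, and recompute the balance by replaying it. All names here (`ledger.csv`, the record layout) are hypothetical choices, not part of any assignment:

    ```java
    import java.io.IOException;
    import java.nio.file.*;

    public class BankFileDemo {
        // Append one transaction line ("id,type,amount") to the ledger file.
        static void record(Path ledger, String id, String type, double amount)
                throws IOException {
            String line = id + "," + type + "," + amount + System.lineSeparator();
            Files.write(ledger, line.getBytes(),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND);
        }

        // Recompute the balance by replaying every deposit and withdrawal.
        static double balance(Path ledger, String id) throws IOException {
            double total = 0;
            for (String line : Files.readAllLines(ledger)) {
                String[] parts = line.split(",");
                if (!parts[0].equals(id)) continue;
                double amount = Double.parseDouble(parts[2]);
                total += parts[1].equals("DEPOSIT") ? amount : -amount;
            }
            return total;
        }

        public static void main(String[] args) throws IOException {
            Path ledger = Paths.get("ledger.csv");  // hypothetical file name
            Files.deleteIfExists(ledger);
            record(ledger, "ACC1", "DEPOSIT", 500);
            record(ledger, "ACC1", "WITHDRAW", 120);
            System.out.println(balance(ledger, "ACC1"));
        }
    }
    ```

    An append-only log like this keeps the file handling trivial; indexing, locking, and input validation are the parts the assignment presumably wants you to work out yourself.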

  • VFM (Virtual File Manager): End of life and replacement products

    In our organization, we are heavy users of VFM (Virtual File Manager) for DFS management and file replication. Unfortunately, VFM is going end of life in Nov 2012 and we have not been able to find suitable product(s) to replace its functionality. We have looked at many alternative products, including Autovirt and ScriptLogic SecureCopy for DFS management and file migration/replication, and have not found a suitable replacement so far. The products we have seen so far do not scale well for larger environments and do not provide the ability to import or batch-create tasks. For the DFS management functionality, it looks like NetApp may be able to create a custom tool for us, but we have not found anything suitable on the replication/migration side of things. Either way, off-the-shelf products would be preferable to a custom-made tool if available.
    Here is a breakdown of the VFM features that we use and their importance to our teams:
    Feature Used | Feature Type | Feature Description | Importance
    Availability Policies | Admin View, Namespace Policies | DFS link replication | High, used to replicate DFS namespace between servers
    Backup Policies | Admin View, Namespace Policies | Backup DFS namespace | High, used for contingency
    Client Recovery Policies | Admin View, Business Continuity Policy | Allows failover of DFS links from PROD to DR | High, used for BCP / DR
    Creation Policies | Admin View, Namespace Policies | |
    Filer recovery policies | Admin View, Business Continuity Policy | DFS link management | Very, currently used for contingency
    Migration Policies | Admin View, Data Movement Policy | Data copy/migration | High, many migrations
    Replication Policies | Admin View, Data Movement Policy | |
    Server recovery policies | Admin View, Business Continuity Policy | DFS link management | Very, currently used for contingency
    VFM GUI | Logical View, add links | DFS link addition / update / delete | Very, used to manage all links
    We are looking for suggestions for replacement products for the functionality of VFM.  Has anyone identified a successor for VFM that would still work within larger environments?  Any suggestions would be appreciated.

    You can get the same functionality that was provided by this product if you take a slightly different approach using symlinks or CIFS widelinks. I've been using a script I wrote which effectively replaced what we were doing in NetApp VFM. I create symlinks in the main share, which I consider to be the primary tier of storage; the symlinks point to the folders residing on secondary SATA tiers of storage. The script parses through the secondary tier, then adds the corresponding symlink to the main tier.
    To enable this, you basically change the options on your share to allow external shares to be traversed (even though it's all on the same storage system). Create a hidden clone share for the primary tier and change the option on that share to not follow external links / only allow internal ones. That share will then show the secondary-tier folders as 0-byte files, which are the shortcuts; you can manage those by deleting the files, or by using the remove-nafile command in the PowerShell toolkit. This is still 7-Mode, but you could take the same approach with CIFS widelinks.
    The script itself only parses and creates the symlinks that point to secondary tiers of storage; it isn't doing any data migration. You could very easily integrate it into a data-migration workflow: scan your main tier for folders whose files haven't been modified in, say, a year, auto-move those to the SATA tier, then run this script. You could go one step further and auto-move them back to primary if you detect the SATA share is in use again: just delete the symlink shortcut and migrate back to primary with a standard copy from one share to the other.
    In addition, this is in my opinion better than using a DFS shortcut for each folder on each tier of storage, wherever it was (like NetApp/Brocade VFM did), where the client was essentially handling the shortcut linking. This approach moves the responsibility to the server building and presenting the share, and the client knows no different. On a Mac this makes a HUGE difference, as the client only sees one DFS pointer in this approach. You can download it here: https://gallery.technet.microsoft.com/scriptcenter/NetApp-symlink-generator-e1de2185
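    The linked script is PowerShell, but the core idea, enumerating the folders on the secondary tier and planting same-named symlinks in the primary share, can be sketched in any language with symlink support. The local paths below are hypothetical stand-ins for the two shares (on Windows, creating symlinks may additionally require elevated privileges):

    ```java
    import java.io.IOException;
    import java.nio.file.*;
    import java.util.stream.Stream;

    public class SymlinkTier {
        public static void main(String[] args) throws IOException {
            // Hypothetical local paths standing in for the primary and SATA shares.
            Path primary = Paths.get("primary");
            Path secondary = Paths.get("secondary");
            Files.createDirectories(primary);
            Files.createDirectories(secondary.resolve("archive2011"));

            // For every top-level folder on the secondary tier, plant a
            // same-named symlink in the primary share, skipping entries
            // that already exist.
            try (Stream<Path> dirs = Files.list(secondary)) {
                dirs.filter(Files::isDirectory).forEach(dir -> {
                    Path link = primary.resolve(dir.getFileName());
                    try {
                        if (!Files.exists(link, LinkOption.NOFOLLOW_LINKS)) {
                            Files.createSymbolicLink(link, dir.toAbsolutePath());
                        }
                    } catch (IOException e) {
                        throw new RuntimeException(e);
                    }
                });
            }
            System.out.println(Files.isSymbolicLink(primary.resolve("archive2011")));
        }
    }
    ```

    Because the links are re-derived from whatever folders exist on the secondary tier, rerunning the scan after a migration is enough to keep the primary share's view current.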

  • CS5 save as CS4 makes Large File and sees virtual memory issue

    Two CS5 Flash files: FileSource, 27 MB, and FileDestination, 21.8 MB.
    They each take 80 seconds to open.
    FileSource has 15 movie clips; 5 show areas of the map in a variety of colours, shape intensive one could say! These are made into one movie clip and copy-pasted into FileDestination. In CS5 it then crashes.
    I saved these instead as CS4 files, and they open in 10 seconds! WOW!!! I did need to adjust the text, which had gone from one line to two, though.
    I try the copy-paste of the movie clip from library to library again; this time I get a message: Flash is out of memory. To increase virtual memory, quit Flash and allocate more virtual memory in the System Properties dialog box.
    In there I see:
    Initial: 2046 MB
    Max size: 4092 MB
    Recommended: 4605 MB
    I alter this to:
    Initial: 3800 MB
    Max: 4600 MB
    Reboot PC (Win XP SP2 with 4 GB RAM, a purpose-built video-editing suite) and try again, but again get the memory message. Such a powerful PC should handle Flash. The PC at work is also crashing with CS5.
    I thought I was winning with the quick handling of the files in CS4, but the file sizes are now alarming: FileSource 76.1 MB and FileDestination 58.4 MB. WHY? And why does it open them so quickly but fall over on the copy-paste?
    CS4 opens these in 8 seconds, not 80, but that Save As CS4 has bloated their file size: quick to open, but causing virtual memory failure.
    I am defeated
    Does CS5 have some way of keeping size down? I can't use CS5, as it crashes when pasting the movie clip from the source file to the destination.
    Can I get Flash to simplify the layers, perhaps? There are many shapes of the same colour which could perhaps be fused into one, reducing the number of vector edges. How is that done? I am making an SWF file for a client's PowerPoint slide show.
    Envirographics

    Possible good news. I've been away and hadn't opened DW since; I also had a problem with Finder and external drives on my return. When I opened it today and needed to do a 'Save As', it did it with no rendering problem. Though there have been no DW updates, there were updates to PS and ID. So I'll tentatively say the problem has gone away, hopefully for good.
    Just editing this post: I happened to turn Live View on etc. for the first time in a couple of weeks due to the above. I realised this morning I hadn't turned it on. Now it is just as before when I try to 'Save As': blank, and then I have to close and reopen for it to render (even though Live View/Live Code is now switched off).
    No, I don't know how to turn this thread back into an 'Unanswered' question.
    Message was edited by: tgiadd
