Generate XSL as a CLOB without file system

I use XDK for PL/SQL in Oracle 8i.
I need a way to generate an XSL without any access to a file system whatsoever, so the example below wouldn't work, because it needs a path to a file in a file system:
ss := xslprocessor.newStylesheet(xsldoc, dir || '\' || xslfile);
I was wondering if there is a similar way of generating an XSL, like the way XML documents are generated in Oracle 9i (XMLTYPE):
SYS.XMLTYPE.CREATEXML('<?xml version="1.0" ?><the whole xml document>');
Thanks in advance
Alex

Sounds like you are running into the same issue as this thread:
XSD as a Constant Error (ORA-31000)
Hard-coding the URL in the SELECT statement may resolve the issue you are running into (assuming 10.2.x.x).
Edited by: A_Non on Oct 30, 2008 10:43 AM
(clarified version)
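For the XDK side of the question: in the call above, the first argument (xsldoc) is already a parsed DOM document, and the second argument is only a reference URL used to resolve relative includes/imports, so nothing forces a real file path. The stylesheet text can therefore be parsed straight from a CLOB. A minimal, hedged sketch (untested; variable names are invented, the two CLOBs are assumed to be populated elsewhere, e.g. fetched from a table, and package availability varies by XDK release):
DECLARE
  l_xml_clob  CLOB;  -- the XML document text (assumed filled elsewhere)
  l_xsl_clob  CLOB;  -- the XSL stylesheet text (assumed filled elsewhere)
  xml_parser  xmlparser.Parser;
  xsl_parser  xmlparser.Parser;
  xmldoc      xmldom.DOMDocument;
  xsldoc      xmldom.DOMDocument;
  ss          xslprocessor.Stylesheet;
  proc        xslprocessor.Processor;
  result_frag xmldom.DOMDocumentFragment;
BEGIN
  xml_parser := xmlparser.newParser;
  xmlparser.parseClob(xml_parser, l_xml_clob);
  xmldoc := xmlparser.getDocument(xml_parser);
  xsl_parser := xmlparser.newParser;
  xmlparser.parseClob(xsl_parser, l_xsl_clob);
  xsldoc := xmlparser.getDocument(xsl_parser);
  -- NULL should be acceptable as the reference URL for a self-contained
  -- stylesheet (no relative xsl:include/xsl:import to resolve)
  ss   := xslprocessor.newStylesheet(xsldoc, NULL);
  proc := xslprocessor.newProcessor;
  result_frag := xslprocessor.processXSL(proc, ss, xmldoc);
  xslprocessor.freeProcessor(proc);
  xmlparser.freeParser(xsl_parser);
  xmlparser.freeParser(xml_parser);
END;
/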

Similar Messages

  • How to generate XSL for an XML file to use it for XSLT transformation -SSIS?

    Hi All,
    Can anybody please help me generate an XSL for my attached XML file?
    I need to use the XSL file for XSLT transformation.
    Thanks & Regards,
    Sri

    Hi Vibhav,
    Thanks for the response.
    I am aware of the process but not sure how to generate the XSL file.
    Can you please refer me to a tool which can convert XML to XSL? Or could you please transform my simple XML to XSL?
    Thanks & Regards,
    Sri

  • How can I access the Server file system without using any signed applet?

    Is it possible to run an applet on the client machine such that the client can view my server's file system and upload and download files through the applet, without signing the applet?

    Add the following to the java.policy file that your plug-in uses:
    grant {
    permission java.security.AllPermission;
    };
    Note that this grants every permission to all code, so it is suitable only for testing; the usual answer is to sign the applet.
  • Save bsp generated HTML on file system

    I would like to save the HTML generated from a BSP page to an HTML file on the native file system.
    Could anyone provide assistance?
    Thank you
    Chris

    I think what you need is to call the BSP URL from ABAP code in a program. I have found an example of a function which would be useful.
    CALL FUNCTION 'HTTP_GET'
      EXPORTING
        ABSOLUTE_URI        = 'your BSP url'
        RFC_DESTINATION     = 'SAPHTTP'
        BLANKSTOCRLF        = 'Y'
      TABLES
        REQUEST_ENTITY_BODY = BODY_REQ.
    LOOP AT BODY_REQ.
      " process each line of the fetched page here
    ENDLOOP.
    Look for more examples of this HTTP_GET function.
    Regards.

  • Give ZEN app file system rights so can install without user login

    ZfD 6.5.2 NW 6.5.5
    Is it possible to give an app rights to the file system on a NetWare
    box to find the MSI it needs to install, even if there isn't a user
    logged in to the workstation?
    We have an app associated with workstations, and when we tried to push
    it, a number of installs failed because the users were logging on to
    the workstation only.
    This is also relevant to apps which install as "unsecure system user",
    because they do not inherit the logged-in user's NetWare file system
    rights. In the past we've given the [Public] trustee rights in order to
    get round this problem, but would like a better solution.
    Anthony

    See:
    https://secure-support.novell.com/Ka...AL_Public.html
    Craig Wilson - MCNE, MCSE, CCNA
    Novell Support Forums Volunteer Sysop
    Novell does not officially monitor these forums.
    Suggestions/Opinions/Statements made by me are solely my own.
    These thoughts may not be shared by either Novell or any rational human.
    "Craig Wilson" <[email protected]> wrote in message
    news:[email protected]...
    > Note: You could also configure the MSI app to "Force Cache". This way the
    > install source would be cached to the local PC.
    >
    >
    > --
    > Craig Wilson - MCNE, MCSE, CCNA
    > Novell Support Forums Volunteer Sysop
    >
    > Novell does not officially monitor these forums.
    >
    > Suggestions/Opinions/Statements made by me are solely my own.
    > These thoughts may not be shared either Novell or any rational human.
    >
    > "Craig Wilson" <[email protected]> wrote in message
    > news:[email protected]...
    >> You may need to assign the MSI to the Workstation object and set the
    >> application to "Distribute in Workstation Security Space if Workstation
    >> Associated".
    >>
    >> Apps can be configured to use "User" or "Workstation/System" credentials,
    >> but an app will never try one if the other is not available. It simply
    >> uses the one for which it is configured.
    >>
    >> There is a really nice TID someplace that shows what security space
    >> different parts of ZEN run in, but I can't find it at the moment.
    >> I will keep looking, but maybe somebody else knows where it is.
    >>
    >> In regards to your missing icons, most likely nobody has seen it or has
    >> any ideas what it may be.
    >> I could not.
    >>
    >> If posts go unanswered, you can always try reposting and mentioning you
    >> did not get an answer previously.
    >> I know that I don't answer unless I have a good idea of what is wrong.
    >> Tossing out guesses may dissuade others from giving their thoughts.
    >> But once they know you are not getting any answers, folks tend to toss
    >> out more guesses.
    >>
    >> --
    >> Craig Wilson - MCNE, MCSE, CCNA
    >> Novell Support Forums Volunteer Sysop
    >>
    >> Novell does not officially monitor these forums.
    >>
    >> Suggestions/Opinions/Statements made by me are solely my own.
    >> These thoughts may not be shared either Novell or any rational human.
    >>
    >> "Anthony Hilton" <[email protected]> wrote in message
    >> news:[email protected]...
    >>> Anthony Hilton wrote:
    >>>
    >>>> Craig Wilson wrote:
    >>>>
    >>>> > Grant Rights to the "Workstation Object".
    >>>> >
    >>>> > This will address some of the issues.
    >>>> > Be sure to not use "Mapped" drives as well.
    >>>>
    >>>> Thanks Craig. I'll do that through the workstation group which the zen
    >>>> app is associated with.
    >>>>
    >>>> Yes, the app uses UNC path.
    >>>>
    >>>> I'm glad you're still here - my 2 previous threads (17 April and 9 May
    >>>> both about missing icons) have gone un-answered and I was beginning to
    >>>> wonder whether everyone had moved over to the Zfd7 forums.
    >>>>
    >>>
    >>> No success yet.
    >>>
    >>> The Workstation group already had RF rights to the directory containing
    >>> the msi. The workstation's effective rights show RF to the msi itself
    >>> but running the Zen app gives msi error 1620 which suggests either no
    >>> access to the source or a share name over 12 characters.
    >>>
    >>> \\server\sys\public\it\zenapps\supplier_opthalmology\supplier_opthalmology.msi
    >>> doesn't seem to breach the 12 character limit.
    >>>
    >>> Any other ideas?
    >>>
    >>> Anthony
    >>>
    >>> --
    >>>
    >>
    >>
    >
    >

  • Crystal Report failed scheduling onto the File System

    Hi,
    We have set up a Crystal report file for scheduling via the file system. This report has been generating fine for the last few months, but for the last few days it has been failing to generate to the file system at the scheduled time. When I checked the logs of the Instance Manager, it shows this error: 'Unable to connect: incorrect log on parameters. Details: [Database Vendor Code: 1017 ]'.
    Please note, I was able to run the report from InfoView without any issue, so I don't believe there is any issue with the stored procedure/report/database connection, but something else.
    Can someone please provide some insight into this?
    Thanks.

    Hello Satish,
    Is this issue specific to the FileSystem location? What about the default Enterprise location?
    When you view the report in InfoView, click the Refresh button to get current data. If the report was published with the Saved Data option, you would see data in the report, but it would be old.
    The step above would confirm that there is no issue with the database user credentials stored with the report. (With an Oracle back end, vendor code 1017 corresponds to ORA-01017: invalid username/password, which points at exactly those stored credentials.)
    Also, open the report in Crystal Reports 2011, refresh it, publish it without saved data to the BI 4 repository, and schedule it.
    Regards,
    Mahesh

  • Store large volume of Image files, what is better ?  File System or Oracle

    I am working on IM (Image Management) software that needs to store and manage over 8.000.000 images.
    I am not sure if I have to use File System to store images or database (blob or clob).
    Until now I only used File System.
    Could someone who already has experience with storing large volumes of images tell me the advantages and disadvantages of using the file system versus the Oracle database?
    My initial database will have 8.000.000 images and it will grow by 3.000.000 a year.
    Each image will have a size between 200 KB and 8 MB, but the mean is 300 KB.
    I am using Oracle 10g I. I read in other forums about postgresql and firebird that it isn't good to store images in the database because the database always crashes.
    I need to know if it is the same with Oracle, and why. Can I trust Oracle for this large a service? Are there tips for storing files in the database?
    Thank's for help.
    Best Regards,
    Eduardo
    Brazil.

    1) Assuming I'm doing my math correctly, you're talking about an initial load of 2.4 TB of images with roughly 0.9 TB added per year, right? That sort of data volume certainly isn't going to cause Oracle to crash, but it does put you into the realm of a rather large database, so you have to be rather careful with the architecture.
    2) CLOBs store Character Large OBjects, so you would not use a CLOB to store binary data. You can use a BLOB. And that may be fine if you just want the database to be a bit-bucket for images. Given the volume of images you are going to have, though, I'm going to wager that you'll want the database to be a bit more sophisticated about how the images are handled, so you probably want to use [Oracle interMedia|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14302/ch_intr.htm#IMURG1000] and store the data in OrdImage columns which provides a number of interfaces to better manage the data.
    3) Storing the data in a database would generally strike me as preferable, if only because of the recoverability implications. If you store data on a file system, you will inevitably have cases where an application writes a file and the transaction to insert the row into the database fails, or the transaction to delete a row from the database succeeds before the file is deleted, which can make things inconsistent (images with nothing in the database, and database rows with no corresponding images). If something fails, you also can't restore the file system and the database to the same point in time.
    4) Given the volume of data you're dealing with, you may want to look closely at moving to 11g. There are substantial benefits to storing large objects in 11g with Advanced Compression (allowing you to compress the data in LOBs automatically and to automatically de-dupe data if you have similar images). SecureFile LOBs can also be used to substantially reduce the amount of REDO that gets generated when inserting data into a LOB column.
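    To make point 4 concrete, a hedged sketch of an 11g SecureFile definition (table and column names are invented; COMPRESS requires the Advanced Compression option):
    CREATE TABLE images (
      image_id NUMBER PRIMARY KEY,
      img      BLOB
    )
    LOB (img) STORE AS SECUREFILE (
      COMPRESS MEDIUM   -- transparently compresses the stored LOBs
      DEDUPLICATE       -- stores byte-identical images only once
    );
    Whether already-compressed image formats gain much from COMPRESS is workload-dependent, and DEDUPLICATE only pays off when identical copies are actually loaded.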
    Justin

  • Xml clob to file

    We are using the following Java to write PL/SQL-generated XML CLOBs to the file system. The help in this forum has been priceless.
    At this point the XML files are being produced but have character issues. I am guessing it is basic Java and, yes, I am new to Java.
    1. Small clobs have trailing box characters (carriage returns)
    2. Larger files have tags broken
    Any input is appreciated.
    Thanks
    create or replace and compile java source named sjs.write_CLOB as
    import java.io.*;
    import java.sql.*;
    import oracle.sql.*;
    public class write_CLOB extends Object
    {
      public static void pass_str_array(oracle.sql.CLOB p_in, java.lang.String f_in)
        throws java.sql.SQLException, IOException
      {
        File target = new File(f_in);
        FileWriter fw = new FileWriter(target);
        BufferedWriter out = new BufferedWriter(fw);
        Reader is = p_in.getCharacterStream();
        char buffer[] = new char[8192];
        int length;
        while ((length = is.read(buffer, 0, 8192)) != -1) {
          // write only the characters actually read: writing the whole
          // buffer is what appends the trailing characters on the last chunk
          out.write(buffer, 0, length);
        }
        is.close();
        out.close(); // flushes the buffer and closes the underlying FileWriter
      }
    }
    /

    We are still hung up on this. I tried implementing the code from Steve's XML book but still haven't resolved it.
    The clob is being created via XSU, see below. The new char[8192] appears to force the output file to 8K,
    with trailing characters on small clobs, but adds a carriage return every 8K on larger ones.
    As usual, any input is appreciated. Does anyone know of a good Java forum like this one?
    Thanks
    PROCEDURE BuildXml(v_return OUT INTEGER, v_message OUT VARCHAR2,
                       string_in VARCHAR2, xml_CLOB OUT NOCOPY CLOB) IS
      queryCtx   DBMS_XMLquery.ctxType;
      sql_string VARCHAR2(2000) := string_in;
    BEGIN
      v_return := 1;
      v_message := 'BuildXml completed successfully.';
      queryCtx := DBMS_XMLQuery.newContext(sql_string);
      xml_CLOB := DBMS_XMLQuery.getXML(queryCtx);
      DBMS_XMLQuery.closeContext(queryCtx);
    EXCEPTION WHEN OTHERS THEN
      v_return := 0;
      v_message := 'BuildXml failed - '||SQLERRM;
    END BuildXml;
    PROCEDURE WriteCLOB(v_return OUT INTEGER, v_message OUT VARCHAR2,
                        result IN OUT NOCOPY CLOB,
                        TargetDirectory IN VARCHAR2, FileName IN VARCHAR2) IS
    BEGIN
      v_return := 1;
      v_message := 'WriteCLOB completed successfully.';
      write_CLOB(result, REPLACE(TargetDirectory||'\'||FileName, '\', '/'));
    EXCEPTION WHEN OTHERS THEN
      v_return := 0;
      v_message := 'WriteCLOB failed - '||SQLERRM;
    END WriteCLOB;
    (The post then repeats the sjs.write_CLOB Java source shown above.)
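    If the file must be written from the database anyway, a PL/SQL-only route avoids the Java layer altogether. A hedged sketch using UTL_FILE ('XML_DIR' stands for a directory listed in UTL_FILE_DIR, the file name is invented, and xml_clob is assumed to hold the CLOB produced by BuildXml):
    DECLARE
      xml_clob CLOB;                 -- assume this was filled by BuildXml
      f        UTL_FILE.FILE_TYPE;
      len      PLS_INTEGER;
      pos      PLS_INTEGER := 1;
      amt      PLS_INTEGER := 8000;  -- chunk size, well under the 32k line limit
    BEGIN
      f   := UTL_FILE.FOPEN('XML_DIR', 'out.xml', 'w', 32767);
      len := DBMS_LOB.GETLENGTH(xml_clob);
      WHILE pos <= len LOOP
        UTL_FILE.PUT(f, DBMS_LOB.SUBSTR(xml_clob, amt, pos));
        pos := pos + amt;
      END LOOP;
      UTL_FILE.FCLOSE(f);
    END;
    /
    Unlike PUT_LINE, UTL_FILE.PUT appends no line terminator, so no extra carriage returns land in the file.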

  • Mounting the Root File System into RAM

    Hi,
    I had been wondering recently how one can copy the entire root hierarchy, or wanted parts of it, into RAM, mount it at startup, and use it as the root itself. At shutdown, the modified files and directories would be synchronized back to the non-volatile storage. This synchronization could also be performed manually, before shutting down.
    I have now succeeded, at least it seems, in performing such a task. There are still some issues.
    For anyone interested, I will be describing how I have done it, and I will provide the files that I have worked with.
    A custom kernel hook is used to (overall):
    Mount the non-volatile root in a mountpoint in the initramfs. I used /root_source
    Mount the volatile ramdisk in a mountpoint in the initramfs. I used /root_ram
    Copy the non-volatile content into the ramdisk.
    Remount by binding each of these two mountpoints in the new root, so that we can have access to both volumes in the new ramdisk root itself once the root is changed, to synchronize back any modified RAM content to the non-volatile storage medium: /rootfs/rootfs_{source,ram}
    A mount handler is set (mount_handler) to a custom function, which mounts, by binding, the new ramdisk root into a root that will be switched to by the kernel.
    To integrate this hook into a initramfs, a preset is needed.
    I added this hook (named "ram") as the last one in mkinitcpio.conf. -- Adding it before some other hooks did not seem to work; and even now, it sometimes does not detect the physical disk.
    The kernel needs to be passed some custom arguments; at a minimum, these are required: ram=1
    When shutting down, the ramdisk contents are synchronized back with the source root by means of a bash script. This script can be run manually to save one's work before/without shutting down. For this (shutdown) event, I made a custom systemd service file.
    I chose to use unison to synchronize between the volatile and the non-volatile mediums. When synchronizing, nothing in the directory structure should be modified, because unison will not synchronize those changes in the end; it will complain and exit with an error, although it will still synchronize the rest. Thus, I recommend that if you sync manually (by running /root/Documents/rootfs/unmount-root-fs.sh, for example), you do not execute any other command before synchronization has completed, because ~/.bash_history, for example, would be updated, and unison would not update this file.
    Some prerequisites exist (by default):
        Packages: unison(, cp), find, cpio, rsync and, of course, any other packages with which you can mount your root file system (type). I have included these: mount.{,cifs,fuse,ntfs,ntfs-3g,lowntfs-3g,nfs,nfs4}, so you may need to install ntfs-3g and the nfs-related packages (nfs-utils?), or remove the unwanted "mount.+" entries from /etc/initcpio/install/ram.
        Referencing paths:
            The variables:
                source=
                temporary=
            ...should have the same value in all of these files:
                "/etc/initcpio/hooks/ram"
                "/root/Documents/rootfs/unmount-root-fs.sh"
                "/root/.rsync/exclude.txt"    -- Should correspond.
            This is needed to sync the RAM disk back to the hard disk.
        I think that it is required to have the old root and the new root mountpoints directly residing at the root / of the initramfs, from what I have noticed. For example, "/new_root" and "/old_root".
    Here are all the accepted and used parameters:
        root                    The source root device (UUID=*, /dev/disk/by-*/*); no default.
        rootfstype              The FS type of the source root, as in "mount -t <types>"; default "auto".
        rootflags               Options for mounting the source root, as in "mount -o <options>"; no default.
        ram                     If this hook should be run (set ram=1); no default.
        ramfstype               The FS type of the RAM disk, as in "mount -t <types>"; default "auto".
        ramflags                Options for mounting the RAM disk, as in "mount -o <options>"; default "size=50%".
        ramcleanup              If any left-overs should be cleaned (recognized value: "0"); no default.
        ramcleanup_source       If the source root should be unmounted (recognized value: "1"); no default.
        ram_transfer_tool       Tool used to transfer the root into RAM (cp, find, cpio, rsync, unison); default "unison".
        ram_unison_fastcheck    Argument to unison's "fastcheck" option (true, false, default, yes, no, auto); default "default". Relevant if ram_transfer_tool=unison.
        ramdisk_cache_use       Set to "0" to keep unison from using any available cache; no default. Relevant if ram_transfer_tool=unison.
        ramdisk_cache_update    Set to "0" to keep unison from copying the cache to the RAM disk; no default. Relevant if ram_transfer_tool=unison.
    This is the basic setup.
    Optionally:
        I disabled /tmp as a tmpfs mountpoint: "systemctl mask tmp.mount" which executes "ln -s '/dev/null' '/etc/systemd/system/tmp.mount' ". I have included "/etc/systemd/system/tmp.mount" amongst the files.
        I unmount /dev/shm at each startup, using ExecStart from "/etc/systemd/system/ram.service".
    Here are the updated (version 3) files, archived: Root_RAM_FS.tar (I did not find a way to attach files -- do the Arch forums allow attachments?)
    I decided to separate the functionalities "mounting from various sources", and "mounting the root into RAM". Currently, I am working only on mounting the root into RAM. This is why the names of some files changed.
    Of course, use what you need from the provided files.
    Here are the values for the time spent copying during startup for each transfer tool. The size of the entire root FS was 1.2 GB:
        find+cpio:  2:10s (2:12s on slower hardware)
        unison:      3:10s - 4:00s
        cp:             4 minutes (31 minutes on slower hardware)
        rsync:        4:40s (55 minutes on slower hardware)
        Beware that the find/cpio option is currently broken; it is available to be selected, but it will not work when being used.
    These are the remaining issues:
        find+cpio option does not create any destination files.
        (On some older hardware) When booting up, the source disk is not always detected.
        When booting up, the custom initramfs is not detected, after it has been updated from the RAM disk. I think this represents an issue with synchronizing back to the source root.
    Inconveniences:
        Unison needs to perform an update detection at each startup.
        initramfs's ash does not expand wildcard characters for "cp" to use.
    That's about what I can think of for now.
    I will gladly try to answer any questions.
    I don't consider myself a UNIX expert, so I would like to know your suggestions for improvement, especially from who consider themselves so.
    Last edited by AGT (2014-05-20 23:21:45)

    How did you use/test unison? In my case, unison, of course, is used in the cpio image, where there are no cache files, because unison has not been run yet in the initcpio image, before it had a chance to be used during boot time, to generate them; and during start up is when it is used; when it creates the archives. ...a circular dependency. Yet, files changed by the user would still need to be traversed to detect changes. So, I think that even providing pre-made cache files would not guarantee that they would be valid at start up, for all configurations of installation. -- I think, though, that these cache files could be copied/saved from the initcpio image to the root (disk and RAM), after they have been created, and used next time by copying them in the initcpio image during each start up. I think $HOME would need to be set.
    Unison was not using any cache previously anyway. I was aware of that, but I wanted to prove it by deleting any cache files remaining.
    Unison, actually, was slower (4 minutes) the first time it ran in the VM, compared to the physical hardware (3:10s). I have not measured the time for its subsequent runs, but it seemed faster after the first run. The VM was hosted on a newer machine than what I have used so far: the VM host has an i3-3227U at 1.9 GHz CPU with 2 cores/4 threads and 8 GB of RAM (4 GB were dedicated to the VM); my hardware has a Pentium B940 at 2 GHz CPU with 2 cores/2 threads and 4 GB of RAM.
    I could see that, in the VM, rsync and cp were copying faster than on my hardware; they were scrolling quicker.
    Grub initially complains that there is no image and shows a "Press any key to continue" message; if you continue, the kernel panics.
    I'll try using "poll_device()". What arguments does it need? More than just the device; also the number of seconds to wait?
    Last edited by AGT (2014-05-20 16:49:35)

  • How to get current local  file system volume sizes information from OMS?

    Hi
    I know I can get this information from the table SYSMAN.MGMT$STORAGE_REPORT_LOCALFS.
    But the info stored in this table is not always up to date, whereas when going to the page em/console/monitoring/hostFilesystemOverview$target=xxxhostnamexxx$type=host$pageType=current$ctxType=Hosts the information is current.
    I have accessed the mentioned table in APEX, from outside OMS; I would like to have current information instead of old.
    How can I do that?
    Thanks

    I think that there is nothing wrong with this table; it's just that the data is not collected every 5 minutes or so, only on a daily basis.
    But as ca107207 said - when you go to the page showing information about the file system, from the host home page, the data is up to date to the current second.
    Therefore I think that OMS asks the agent to send this value, but it is then not stored anywhere.
    My question would be how to ask the agent, from outside OMS, to get this information?
    I have done a little reverse engineering on the OMS packages and there should be a way to get this, using some procedures, PL/SQL code generating some cursors, etc.
    I'm not good enough at PL/SQL to create something like that; it would take too much time for me. I think that it can't be done without OMS at all; I just have another database with APEX on the same host, and APEX displays some information about the file systems for other users. It would be nice to have current information about file system usage.
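    For the APEX report, the repository view named above can at least be queried directly; a minimal sketch, with the caveat discussed here that it only returns the last collected (daily) snapshot rather than live data (the querying account is assumed to have SELECT rights on the SYSMAN views):
    SELECT *
      FROM sysman.mgmt$storage_report_localfs;
    -- the column list varies by EM version, hence the *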
    Thanks

  • SAP GoLive : File System Response Times and Online Redologs design

    Hello,
    A SAP Going Live Verification session has just been performed on our SAP Production environnement.
    SAP ECC6
    Oracle 10.2.0.2
    Solaris 10
    As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them :
    1/
    We have been told that our file system read response times "do not meet the standard requirements".
    The following datafile has been considered as having too high an average read time per block:
    File name: /oracle/PMA/sapdata5/sr3700_10/sr3700.data10 - Blocks read: 67534 - Avg. read time: 23 ms - Total read time: 1553282 ms
    I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    2/
    We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
    Actually, we have BW loading that generates a "Checkpoint not complete" message every night.
    I've read in SAP note 79341 that:
    "The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
    Frankly, I have problems understanding this sentence.
    Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    But how is it that frequent checkpoints should decrease the time necessary for recovery?
    Thank you.
    Any useful help would be appreciated.

    Hello
    >> I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    The recommended ("standard") values are published at the end of sapnote #322896.
    23 ms seems really a little bit high to me - for example, we have roughly 4 to 6 ms on our productive system (with SAN storage).
    >> Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    Correct.
    >> But how is it that frequent checkpoints should decrease the time necessary for recovery?
    A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event, the following 3 things happen in an Oracle database:
    Every dirty block in the buffer cache is written down to the datafiles
    The latest SCN is written (updated) into the datafile headers
    The latest SCN is also written to the controlfiles
    If your redo log files are larger, checkpoints do not happen as often, and in that case the dirty buffers are not written down to the datafiles (as long as no free space is needed in the buffer cache). So if your instance crashes, you need to apply more redo logs to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, and so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN -> ergo the recovery is faster.
    But this concept does not really fit reality, because Oracle implements some algorithms to reduce the workload for the DBWR in the case of a checkpoint.
    There are also several parameters (depending on the Oracle version) which ensure that a required recovery time is kept (for example FAST_START_MTTR_TARGET).
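    For reference, the resize itself is done by adding new, larger groups and dropping the old ones; a hedged sketch (group number, path and both sizes are placeholders):
    ALTER DATABASE ADD LOGFILE GROUP 5
      ('/oracle/PMA/origlogA/log_g5m1.dbf') SIZE 512M;
    ALTER SYSTEM SWITCH LOGFILE;
    ALTER SYSTEM CHECKPOINT;              -- lets the previous group go INACTIVE
    ALTER DATABASE DROP LOGFILE GROUP 1;  -- fails while the group is CURRENT or ACTIVE
    ALTER SYSTEM SET FAST_START_MTTR_TARGET = 300;  -- recovery-time target in seconds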
    Regards
    Stefan

  • How to get access to the local file system when running with Web Start

    I'm trying to create a JavaFX app that reads and writes image files on the local file system. Unfortunately, when I run it using the JNLP file that NetBeans generates, I get access permission errors when I try to create an Image object from a .png file.
    Is there any way to make this work in NetBeans? I assume I need to sign the jar or something? I tried turning on "Enable Web Start" in the application settings, with "self-sign by generated key", but that made it so the app wouldn't launch at all using the JNLP file.

    Same as usual as with any other Web Start app: sign the app or modify the policies of the local JRE. Better to sign the app with a temp certificate.
    As for the 2nd error (signed app does not launch), I have no idea, as I haven't tried using JWS with FX 2.0 yet. Try activating the console and logging in Java's control panel options (in W7, JWS logs are in c:\users\<userid>\appdata\LocalLow\Sun\Java\Deployment\log) and see if anything appears there.
    Anyway, JWS errors are notoriously not easy to figure out, and the whole technology in itself is temperamental. Find the tool named JaNeLA on the web; it will help you analyze syntax errors in your JNLP (though it is not aware of the new syntax introduced for FX 2.0 and may produce lots of errors on those). Then head to the JWS forum (Java Web Start & JNLP); Andrew Thompson, who dwells over there, is the author of JaNeLA.

  • How do I use a NAS file system attached to my router to store iTunes purchases?

    We have four Windows devices networked in our house. They all run iTunes with the same Apple ID, so when any one of them has iTunes running, we can see that computer on our Apple TV. Two run Windows XP, one runs Vista Business, and the newest runs Windows 7. Upgrading all to the same Windows software is out of the question. Our NAS is hung on an "off warranty" Linksys E3000 router which communicates via USB with a 1 TB NTFS-formatted Western Digital hard drive. Our plan when we set up this little network, not cheaply, in our 1939 balloon-construction tank of a house, was to build our library of digital images, music, and video primarily on that device. It supports a variant of Windows streaming support, but poorly. The best streaming option for our environment seems to be our Ethernet-connected Apple TV, which is hung off the big TV in our family room and has access to the surround-sound "home theater" in that room. We've incrementally built up collections of photos, digitized music, and most recently educational materials, podcasts, etc., which threaten the storage limits on a couple of default C: drives on these Windows systems. iTunes, before the latest upgrades, gave us some feedback on the file system connections it established without going to the properties of the individual files, but the latest one has yet to be figured out. MP3s and photos behave fairly well, including the recently available connection between Adobe software and iTunes which magically appeared, allowing JPEG files indexed by Adobe Photoshop Elements running on the two fastest computers to show up under control on the Apple TV's attached 56" Samsung screen!
    Problems arose when we started trying to set things up so that purchases downloaded from the iTunes store ended up directly on the NAS, and when things downloaded to a specific iTunes library on one of the Windows boxes caused storage "issues" on that box. A bigger problem looming in the immediate future is the "housecleaning" effort which is part of my set of New Year's resolutions. How do I get control of all of my collections and merge them on the NAS without duplicate files, so that when we have, for example, .AAC and .MP3 versions, only the required "best" option for the specific piece of music becomes a candidate for streaming?
    I envision this consolidation effort as a "once in a lifetime" effort. I'm 70, my wife is 68 and not as "technical" as I am, so documented procedures will be required.
    I plan to keep this thread updated with progress and questions as this project proceeds. Links to well-documented "how to" experiences may be appreciated by those who follow it. I plan to post progress reports and detailed issues going forward. Please help?

    Step 1 - by trial and error...
    So far, I have been able to create physical files containing MP3 and JPG on the NAS, using the Windows XP systems to copy from shared locations on the Vista and Win7 boxes. This process has been aided by the use of a 600 GB SATA 2 capable hard-drive enclosure. I first attach it to the Win7 or Vista box and reboot to see the local drive spaces formatted on the portable device. Then I copy files from the user's private directories to the public drive space. When the portable drive is wired to an XP box, I can use Windows to move the files from the portable device to the NAS without any of the more advanced file attributes being copied to the NAS. Once the files are on the NAS, I can add the new folder(s) to iTunes on any of the computers and voilà, the data becomes sharable via iTunes. So far, this works for anything that I have completely purchased, or for MP3s I made from the AAC files created when I purchased albums via iTunes.
    I have three huge boxes full of vinyl records I've accumulated. The ones that I've successfully digitized, via a turntable attached to the sound card on one of my computers and third-party software, have found their way to the NAS after being imported into iTunes and using it to bring down available album artwork. In general, I've been reasonably well pleased with the sound quality of the digital MP3 files created this way, but the software I've been using sometimes has serious problems automatically separating individual songs from the album tracks, and re-converting "one at a time" isn't very efficient.

  • File systems available on Windows Server 2012 R2?

    What are the supported file systems in Windows Server 2012 R2? I mean the complete list. I know you can create, read and write on FAT32, NTFS and ReFS. What about non-Microsoft file systems, like EXT4 or HFS+? If I create a VM with a Linux OS, will I be able to access the virtual hard disk natively from WS 2012 R2, or will I need a third-party tool, like the one from Paragon? If I have a drive formatted in EXT4 or HFS+, will I be able to access it from Windows without any third-party tool? By access, I mean both read and write. I know that on the client OS, Windows 8.1, this is not possible natively, which is why I am asking here; I guess it is very possible for the server OS to have built-in support for accessing those file systems. If Hyper-V has been optimised to run not just Windows VMs but also Linux VMs, it would make sense to me for file systems like those from Linux or OS X to be available via a built-in feature. I have tried to mount the vhd from a Linux VM I had created in Hyper-V; Windows Explorer could not read the hard drive.

    Installed Paragon ExtFS free. With it loaded, I tried to mount, in Windows Explorer, an ext4-formatted vhd created on a Linux Hyper-V VM; it failed, and Paragon ExtFS crashed. Uninstalled Paragon ExtFS. The free version was not supported on WS 2012 R2 by Paragon; if Windows has no built-in support for ext4, this means this free software has not messed anything up in the OS, I guess.
    Don't mess with third-party kernel-mode file systems, as it's basically begging for trouble: a crash inside them will make the whole system BSOD, and third-party FSes are typically buggy, because a) FS development for Windows is VERY complex and b) there are very few external adopters, so not that many people actually test them. What you can do, however:
    1) Spawn an OS with a supported FS inside a VM and configure loopback connectivity (even over SMB) with your host. So you'll read and write your volume inside a VM and copy content to/from the host.
    (I personally use this approach in the reverse direction: my primary OS is MacOS X, but I read/write NTFS-formatted disks from inside a Windows 7 VM I run on VMware Fusion.)
    2) Use a user-mode file system explorer (see sample links below; I'm NOT affiliated with those companies). So you'll copy content from the volume as if it were some sort of a shell extension.
    Crashes in 1) and 2) would not touch your whole OS's stability.
    HFS Explorer for Windows
    http://www.heise.de/download/hfsexplorer.html
    Ext2Read
    http://sourceforge.net/projects/ext2read/
    (both are user-land applications for HFS(+) and EXT2/3/4 accordingly)
    Hope this helped :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI, uses Ethernet to mirror internally mounted SATA disks between hosts.

  • Folders not appearing in file system

    Ok, so here is my issue. I have a law firm that asked me to set up a simple backup script for their network drive... back up all their case files to external hard drives. The network is mostly Windows XP computers with one Windows 7 computer. The share drive is on a Windows XP computer that I don't normally have direct access to (it's the attorney's office computer and it is rarely available to work on during the regular business day, which is the only time I have to work). So, I created a batch file, which is left on the backup drives, to xcopy the files through a symbolically linked directory on the C drive. I did that so I could set the same symbolic directory on all the computers, and the script will work regardless of which computer the drives are plugged into.
    So, my script runs xcopy with the /d /v /y /e flags, plus an exclusion file excluding random things like $RECYCLE.BIN and /RECYCLER/. Because a lot of the computers are XP, robocopy wasn't an option. Regardless, I found that running robocopy created the same problem anyway.
    After copying everything over, the file system stops recognizing the backup directory. My batch file is still there. The exclusion text file is still there. The Backup directory, however, just vanishes. It is almost like it's hidden, but it doesn't show up when hidden files and directories are shown and, in the CLI, dir /ah doesn't pick it up either. It is there, though... trying to create another "Backup" directory at that location is rejected and, if I remember where things are, I can still cd into the directory structure. I have tried restarting the external hard drive. I've tried rebooting the computer. I've tried refreshing Explorer. Nothing works. If I copy the files over, the directory that I copy them to just stops being recognized... every single time. Not sure if it matters, but I formatted the hard drives to NTFS. So, any ideas?
    As a side note... $300 for tech support? Really? What I'm doing is really basic and simple, and I'm not using anything that isn't integral to your software... xcopy. You want me to pay $300 to report and fix bugs that are either in a fundamental component of your own command-line application suite or in the compatibility of your file system? "Let's get them to pay for something that doesn't work and then make them pay us to fix it!" That is blatantly unethical. In any other industry it wouldn't even come close to being legit. If you paid a plumber to install piping in your house, and then your house flooded right after you turned the water on, would it be reasonable to have to pay him again to fix the problems that he caused? Nobody would tolerate it... but somehow it's standard practice here. Your solution is "Either pay us, or get the community of other users to help you". There is a good reason why my entire business (not the law firm, but my business) runs on Linux and other open-source software. If we are being told to "fix it yourselves" anyway, then I at least want access to the source code so I can. Seriously... I hate it when I have to come here to correct issues with MS's overpriced, bloated, proprietary POS.

    If I read this correctly, you are going the other way, but copying files from Windows XP to a Windows 7 machine will cause networking to drop out on the Win7 machine; you have to do some registry tweaks for the networking not to drop out. I've run into
    that several times.
    If that has nothing to do with your issue, the only thing I can think of is a problem with your script causing it to delete or not successfully copy the files. I've used xcopy /d /i /c /e /y for lots of things without an issue, though. Perhaps it is something with
    your exclusions or symbolic links. Maybe drop the /v and see if that makes a difference?
