Default File Server in ODI

Hi All,
I am currently working through the ODI tutorials.
By mistake I deleted the default FILE_GENERIC file data server in the Physical Architecture of the Topology Navigator.
How can I add back this default server for files?
I have ODI 11g on my laptop.
Please suggest.
Rgds
S

Simply create a new data server named FILE_GENERIC with the JDBC details below:
com.sunopsis.jdbc.driver.file.FileDriver
jdbc:snps:dbfile
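In ODI 11g this is done in Topology Navigator: Physical Architecture > Technologies > File > New Data Server; enter the driver and URL above, then add a physical schema pointing at your flat-file directory (plus a logical schema and context mapping). As an optional sanity check outside ODI, here is a minimal Java sketch that loads the same driver and URL; it assumes the ODI file driver jar is on the classpath and is in no way required by ODI itself:

import java.sql.Connection;
import java.sql.DriverManager;

public class FileGenericCheck {
    public static void main(String[] args) throws Exception {
        // Driver class and URL are exactly the ones quoted above.
        Class.forName("com.sunopsis.jdbc.driver.file.FileDriver");
        try (Connection conn = DriverManager.getConnection("jdbc:snps:dbfile")) {
            System.out.println("File driver loaded and URL accepted.");
        }
    }
}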

Similar Messages

  • Not able to select physical schema directory for file data server in ODI 11g

    Hi,
    I am a beginner to ODI and stuck on an error while doing a tutorial (mentioned in this link - http://st-curriculum.oracle.com/obe/fmw/odi/odi_11g/ODIproject_ff-to-ff/ODIproject_flatfile-to-flatfile.htm).
    While creating a physical schema for the default file server (FILE_GENERIC), I am not able to select the schema directories, and the Name field with the value 'FILE_GENERIC.Directory' is grayed out (non-editable).
    I have gone through many documents but could not find any relevant information about this.
    So could you please let me know if any configuration is required for this?
    Thanks,
    Anusha

    Hi Oleg,
    Thanks for your reply.
    While creating the physical schema, the Name field is grayed out; is that the default behaviour of the screen? In the tutorial I could see the Name field pointing to a file directory path.
    Thanks,
    Anusha

  • Configure EP6SP9 KM in a file server instead of KM Database

    Hi All,
    We are at EP6SP9 in Windows 2003.
    As part of my user requirement, I need to set up all my KM folders in a file repository and not in the KM database. The users would view these folders as KM folders and need to utilise all the facilities offered by KM, like versioning, subscription and notification.
    I also need to integrate the central ADS server as the user management engine (Windows authentication) so that all users automatically log on to the portal when they log on to the system. TREX also has to index the documents on this file server.
    Need some advice on the same.
    1) I have seen a document entitled "Integration of Windows File Services into SAP KM Platform using SSO and WebDAV Repository Manager". Is this the one that I have to use for setting the system up, or are there other suggestions? Has anyone tried connecting KM to a file server database?
    2) Can I utilise all the KM functionality with the documents and folders residing on a file server?
    Would love to interact with anyone who has worked on the same.
    Regards,
    Rajan.K

    There's already lots of information on the subject right here on SDN. Here are a few pointers to get you started:
    CM repository documentation in SAP Help:
    http://help.sap.com/saphelp_nw04/helpdata/en/62/468698a8e611d5993600508b6b8b11/content.htm
    Weblog with step-by-step instructions on creating an FSDB repository:
    Creating a CM Repository Manager - FSDB
    I basically just followed the weblog and it worked. In fact, some steps in the weblog are not necessary if you intend to use the default "documents" repository. In that case, just switch to FSDB persistence mode, add your network paths and it should work after restarting the engine.
    Note that the contents of your repository will be deleted once you switch unless you back up your files to the FSDB root prior to that.
    Hope this helps.

  • Create a new web application: how shall I update the server.xml file?

    Hi,
    I will create a new web application, named newApp. Then I create a file structure as follows:
    - <server-root>/newApp
    - <server-root>/newApp/WEB-INF
    - <server-root>/newApp/WEB-INF/classes
    Then I must tell the server that I have created a new web application, which means updating the file server.xml. How shall I do this, and where in the file shall I type in the new information?
    I use windows XP Pro, and Tomcat 4.1.27.
    My server.xml file looks like below:
    <!-- Example Server Configuration File -->
    <!-- Note that component elements are nested corresponding to their
    parent-child relationships with each other -->
    <!-- A "Server" is a singleton element that represents the entire JVM,
    which may contain one or more "Service" instances. The Server
    listens for a shutdown command on the indicated port.
    Note: A "Server" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <Server port="8005" shutdown="SHUTDOWN" debug="0">
    <!-- Comment these entries out to disable JMX MBeans support -->
    <!-- You may also configure custom components (e.g. Valves/Realms) by
    including your own mbean-descriptor file(s), and setting the
    "descriptors" attribute to point to a ';' seperated list of paths
    (in the ClassLoader sense) of files to add to the default list.
    e.g. descriptors="/com/myfirm/mypackage/mbean-descriptor.xml"
    -->
    <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener"
    debug="0"/>
    <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"
    debug="0"/>
    <!-- Global JNDI resources -->
    <GlobalNamingResources>
    <!-- Test entry for demonstration purposes -->
    <Environment name="simpleValue" type="java.lang.Integer" value="30"/>
    <!-- Editable user database that can also be used by
    UserDatabaseRealm to authenticate users -->
    <Resource name="UserDatabase" auth="Container"
    type="org.apache.catalina.UserDatabase"
    description="User database that can be updated and saved">
    </Resource>
    <ResourceParams name="UserDatabase">
    <parameter>
    <name>factory</name>
    <value>org.apache.catalina.users.MemoryUserDatabaseFactory</value>
    </parameter>
    <parameter>
    <name>pathname</name>
    <value>conf/tomcat-users.xml</value>
    </parameter>
    </ResourceParams>
    </GlobalNamingResources>
    <!-- A "Service" is a collection of one or more "Connectors" that share
    a single "Container" (and therefore the web applications visible
    within that Container). Normally, that Container is an "Engine",
    but this is not required.
    Note: A "Service" is not itself a "Container", so you may not
    define subcomponents such as "Valves" or "Loggers" at this level.
    -->
    <!-- Define the Tomcat Stand-Alone Service -->
    <Service name="Tomcat-Standalone">
    <!-- A "Connector" represents an endpoint by which requests are received
    and responses are returned. Each Connector passes requests on to the
    associated "Container" (normally an Engine) for processing.
    By default, a non-SSL HTTP/1.1 Connector is established on port 8080.
    You can also enable an SSL HTTP/1.1 Connector on port 8443 by
    following the instructions below and uncommenting the second Connector
    entry. SSL support requires the following steps (see the SSL Config
    HOWTO in the Tomcat 4.0 documentation bundle for more detailed
    instructions):
    * Download and install JSSE 1.0.2 or later, and put the JAR files
    into "$JAVA_HOME/jre/lib/ext".
    * Execute:
    %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA (Windows)
    $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA (Unix)
    with a password value of "changeit" for both the certificate and
    the keystore itself.
    By default, DNS lookups are enabled when a web application calls
    request.getRemoteHost(). This can have an adverse impact on
    performance, so you can disable it by setting the
    "enableLookups" attribute to "false". When DNS lookups are disabled,
    request.getRemoteHost() will return the String version of the
    IP address of the remote client.
    -->
    <!-- Define a non-SSL Coyote HTTP/1.1 Connector on port 8080 -->
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8080" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="100" debug="0" connectionTimeout="20000"
    useURIValidationHack="false" disableUploadTimeout="true" />
    <!-- Note : To disable connection timeouts, set connectionTimeout value
    to -1 -->
    <!-- Define a SSL Coyote HTTP/1.1 Connector on port 8443 -->
    <!--
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8443" minProcessors="5" maxProcessors="75"
    enableLookups="true"
    acceptCount="100" debug="0" scheme="https" secure="true"
    useURIValidationHack="false" disableUploadTimeout="true">
    <Factory className="org.apache.coyote.tomcat4.CoyoteServerSocketFactory"
    clientAuth="false" protocol="TLS" />
    </Connector>
    -->
    <!-- Define a Coyote/JK2 AJP 1.3 Connector on port 8009 -->
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8009" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" connectionTimeout="0"
    useURIValidationHack="false"
    protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"/>
    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <!--
    <Connector className="org.apache.ajp.tomcat4.Ajp13Connector"
    port="8009" minProcessors="5" maxProcessors="75"
    acceptCount="10" debug="0"/>
    -->
    <!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
    <!-- See proxy documentation for more information about using this. -->
    <!--
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
    port="8082" minProcessors="5" maxProcessors="75"
    enableLookups="true"
    acceptCount="100" debug="0" connectionTimeout="20000"
    proxyPort="80" useURIValidationHack="false"
    disableUploadTimeout="true" />
    -->
    <!-- Define a non-SSL legacy HTTP/1.1 Test Connector on port 8083 -->
    <!--
    <Connector className="org.apache.catalina.connector.http.HttpConnector"
    port="8083" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" />
    -->
    <!-- Define a non-SSL HTTP/1.0 Test Connector on port 8084 -->
    <!--
    <Connector className="org.apache.catalina.connector.http10.HttpConnector"
    port="8084" minProcessors="5" maxProcessors="75"
    enableLookups="true" redirectPort="8443"
    acceptCount="10" debug="0" />
    -->
    <!-- An Engine represents the entry point (within Catalina) that processes
    every request. The Engine implementation for Tomcat stand alone
    analyzes the HTTP headers included with the request, and passes them
    on to the appropriate Host (virtual host). -->
    <!-- You should set jvmRoute to support load-balancing via JK/JK2 ie :
    <Engine name="Standalone" defaultHost="localhost" debug="0" jmvRoute="jvm1">
    -->
    <!-- Define the top level container in our container hierarchy -->
    <Engine name="Standalone" defaultHost="localhost" debug="0">
    <!-- The request dumper valve dumps useful debugging information about
    the request headers and cookies that were received, and the response
    headers and cookies that were sent, for all requests received by
    this instance of Tomcat. If you care only about requests to a
    particular virtual host, or a particular application, nest this
    element inside the corresponding <Host> or <Context> entry instead.
    For a similar mechanism that is portable to all Servlet 2.3
    containers, check out the "RequestDumperFilter" Filter in the
    example application (the source for this filter may be found in
    "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").
    Request dumping is disabled by default. Uncomment the following
    element to enable it. -->
    <!--
    <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
    -->
    <!-- Global logger unless overridden at lower levels -->
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="catalina_log." suffix=".txt"
    timestamp="true"/>
    <!-- Because this Realm is here, an instance will be shared globally -->
    <!-- This Realm uses the UserDatabase configured in the global JNDI
    resources under the key "UserDatabase". Any edits
    that are performed against this UserDatabase are immediately
    available for use by the Realm. -->
    <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
    debug="0" resourceName="UserDatabase"/>
    <!-- Comment out the old realm but leave here for now in case we
    need to go back quickly -->
    <!--
    <Realm className="org.apache.catalina.realm.MemoryRealm" />
    -->
    <!-- Replace the above Realm with one of the following to get a Realm
    stored in a database and accessed via JDBC -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="org.gjt.mm.mysql.Driver"
    connectionURL="jdbc:mysql://localhost/authority"
    connectionName="test" connectionPassword="test"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="oracle.jdbc.driver.OracleDriver"
    connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
    connectionName="scott" connectionPassword="tiger"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!--
    <Realm className="org.apache.catalina.realm.JDBCRealm" debug="99"
    driverName="sun.jdbc.odbc.JdbcOdbcDriver"
    connectionURL="jdbc:odbc:CATALINA"
    userTable="users" userNameCol="user_name" userCredCol="user_pass"
    userRoleTable="user_roles" roleNameCol="role_name" />
    -->
    <!-- Define the default virtual host -->
    <Host name="localhost" debug="0" appBase="webapps"
    unpackWARs="true" autoDeploy="true">
    <!-- Normally, users must authenticate themselves to each web app
    individually. Uncomment the following entry if you would like
    a user to be authenticated the first time they encounter a
    resource protected by a security constraint, and then have that
    user identity maintained across all web applications contained
    in this virtual host. -->
    <!--
    <Valve className="org.apache.catalina.authenticator.SingleSignOn"
    debug="0"/>
    -->
    <!-- Access log processes all requests for this virtual host. By
    default, log files are created in the "logs" directory relative to
    $CATALINA_HOME. If you wish, you can specify a different
    directory with the "directory" attribute. Specify either a relative
    (to $CATALINA_HOME) or absolute path to the desired directory.
    -->
    <!--
    <Valve className="org.apache.catalina.valves.AccessLogValve"
    directory="logs" prefix="localhost_access_log." suffix=".txt"
    pattern="common" resolveHosts="false"/>
    -->
    <!-- Logger shared by all Contexts related to this virtual host. By
    default (when using FileLogger), log files are created in the "logs"
    directory relative to $CATALINA_HOME. If you wish, you can specify
    a different directory with the "directory" attribute. Specify either a
    relative (to $CATALINA_HOME) or absolute path to the desired
    directory.-->
    <Logger className="org.apache.catalina.logger.FileLogger"
    directory="logs" prefix="localhost_log." suffix=".txt"
    timestamp="true"/>
    <!-- Define properties for each web application. This is only needed
    if you want to set non-default properties, or have web application
    document roots in places other than the virtual host's appBase
    directory. -->
         <DefaultContext reloadable="true"/>
    <!-- Tomcat Root Context -->
    <Context path="" docBase="ROOT" debug="0"/>
    <!-- Tomcat Examples Context -->
    <Context path="/examples" docBase="examples" debug="0"
    reloadable="true" crossContext="true">
    <Logger className="org.apache.catalina.logger.FileLogger"
    prefix="localhost_examples_log." suffix=".txt"
    timestamp="true"/>
    <Ejb name="ejb/EmplRecord" type="Entity"
    home="com.wombat.empl.EmployeeRecordHome"
    remote="com.wombat.empl.EmployeeRecord"/>
    <!-- If you wanted the examples app to be able to edit the
    user database, you would uncomment the following entry.
    Of course, you would want to enable security on the
    application as well, so this is not done by default!
    The database object could be accessed like this:
    Context initCtx = new InitialContext();
    Context envCtx = (Context) initCtx.lookup("java:comp/env");
    UserDatabase database =
    (UserDatabase) envCtx.lookup("userDatabase");
    -->
    <!--
    <ResourceLink name="userDatabase" global="UserDatabase"
    type="org.apache.catalina.UserDatabase"/>
    -->
    <!-- PersistentManager: Uncomment the section below to test Persistent
    Sessions.
    saveOnRestart: If true, all active sessions will be saved
    to the Store when Catalina is shutdown, regardless of
    other settings. All Sessions found in the Store will be
    loaded on startup. Sessions past their expiration are
    ignored in both cases.
    maxActiveSessions: If 0 or greater, having too many active
    sessions will result in some being swapped out. minIdleSwap
    limits this. -1 or 0 means unlimited sessions are allowed.
    If it is not possible to swap sessions new sessions will
    be rejected.
    This avoids thrashing when the site is highly active.
    minIdleSwap: Sessions must be idle for at least this long
    (in seconds) before they will be swapped out due to
    activity.
    0 means sessions will almost always be swapped out after
    use - this will be noticeably slow for your users.
    maxIdleSwap: Sessions will be swapped out if idle for this
    long (in seconds). If minIdleSwap is higher, then it will
    override this. This isn't exact: it is checked periodically.
    -1 means sessions won't be swapped out for this reason,
    although they may be swapped out for maxActiveSessions.
    If set to >= 0, guarantees that all sessions found in the
    Store will be loaded on startup.
    maxIdleBackup: Sessions will be backed up (saved to the Store,
    but left in active memory) if idle for this long (in seconds),
    and all sessions found in the Store will be loaded on startup.
    If set to -1 sessions will not be backed up, 0 means they
    should be backed up shortly after being used.
    To clear sessions from the Store, set maxActiveSessions, maxIdleSwap,
    and minIdleBackup all to -1, saveOnRestart to false, then restart
    Catalina.
    -->
    <!--
    <Manager className="org.apache.catalina.session.PersistentManager"
    debug="0"
    saveOnRestart="true"
    maxActiveSessions="-1"
    minIdleSwap="-1"
    maxIdleSwap="-1"
    maxIdleBackup="-1">
    <Store className="org.apache.catalina.session.FileStore"/>
    </Manager>
    -->
    <Environment name="maxExemptions" type="java.lang.Integer"
    value="15"/>
    <Parameter name="context.param.name" value="context.param.value"
    override="false"/>
    <Resource name="jdbc/EmployeeAppDb" auth="SERVLET"
    type="javax.sql.DataSource"/>
    <ResourceParams name="jdbc/EmployeeAppDb">
    <parameter><name>username</name><value>sa</value></parameter>
    <parameter><name>password</name><value></value></parameter>
    <parameter><name>driverClassName</name>
    <value>org.hsql.jdbcDriver</value></parameter>
    <parameter><name>url</name>
    <value>jdbc:HypersonicSQL:database</value></parameter>
    </ResourceParams>
    <Resource name="mail/Session" auth="Container"
    type="javax.mail.Session"/>
    <ResourceParams name="mail/Session">
    <parameter>
    <name>mail.smtp.host</name>
    <value>localhost</value>
    </parameter>
    </ResourceParams>
    <ResourceLink name="linkToGlobalResource"
    global="simpleValue"
    type="java.lang.Integer"/>
    </Context>
    </Host>
    </Engine>
    </Service>
    </Server>

    To use servlets you do indeed have to update your web.xml... though I'm not sure this is relevant to your case anyway.
    You have to add a <servlet> element to this file.
    Something like this:
    <servlet>
    <servlet-name>blabla</servlet-name>
    <servlet-class>blablapackage.Blablaclass</servlet-class>
    <init-param>...</init-param>
    </servlet>
    Now this may not solve your problem. Make sure you refer to your servlets using their fully qualified names. By the way, just to be sure, what is your definition of "servlet"? (I mean: any Java class, or only javax.servlet.Servlet?)
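    As for server.xml itself: with Tomcat 4, registering the new application is usually just a <Context path="/newApp" docBase="newApp"/> entry inside the <Host> element shown above, and since your Host has autoDeploy="true" you can often skip even that by placing newApp under webapps. For illustration, a minimal servlet behind such a <servlet> entry could look like the sketch below; the package and class names are the placeholders from the snippet above, not real ones.

    package blablapackage;

    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // The fully qualified name of this class, blablapackage.Blablaclass,
    // is exactly what belongs in the <servlet-class> element of web.xml.
    public class Blablaclass extends HttpServlet {
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            resp.setContentType("text/html");
            PrintWriter out = resp.getWriter();
            out.println("<html><body>newApp is deployed</body></html>");
        }
    }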

  • How can I use Airport Extreme just as a file server - no wifi

    I'm setting up a new Apple environment for an invalid music buff.  Here are the components, all newly purchased.
    MacBook Pro
    Airport Extreme
    Airport Express
    I was handed a G-Drive with a large iTunes file collection on it.
    The user will have Comcast as his network provider and he already has a Comcast wireless router, so none of the Apple devices will need to be used as a wireless router or base station (assuming I understand what Apple means by a "base station").
    The Airport Express has the primary function of acting as an AirPlay device, feeding an analog music signal to the user's hi-end audio system (Denon integrated amplifier, Thiel Loudspeakers).
    The Airport Extreme has the sole function of acting as a network file share for the G-Drive.
    The MacBook Pro will operate iTunes and will get its music files from the network shared G-Drive and send it to the AirPlay device, which will feed the music signal to the hi-end audio system.
    I am able to get the Airport Express to simply act as a wireless client on the Comcast provided wireless network.  The MacBook Pro is also on the network and can see the Airport Express as an AirPlay device and send music to the audio system.
    What I'm having problems with is getting the Airport Extreme to simply act as a file server on this LAN.  It can either work in bridged mode and just be a wireless network client, or I can readily run an ethernet cable from the Airport Extreme to the Airport Express, or even to the Comcast router. 
    I tried to connect the MacBook Pro to the Airport Extreme with an ethernet cable and get it to "join an existing network", but it refused to recognize the Comcast provided network.  All it could see were some networks in homes nearby.
    I was hoping that if I just ran a network cable between the Airport Extreme and the Airport Express, that it would just get a network address over DHCP through the Airport Express.  Nope.
    So I'm stuck. How do I get the Airport Extreme to simply act as a file server for the G-Drive on my LAN?

    "Can I use the Airport Extreme base station as a wired router, with wireless disabled for the time being?"
    Sure, but it will probably take just as much time... a few minutes... to turn the wireless off as to simply change the name of the default wireless network to your personal choice.
    The AirPort Extreme is pre-configured to create a wireless network when it is hooked up to a modem. All you need do is assign a name to this network and establish a password. If you don't want to do this, you can turn the wireless off and use the device as a wired only router.
    AirPort Utility, the application that is used to set up the AirPort Extreme, has a simple guided step-by-step process for you to configure the device the way you want.

  • How can I make a file server sync with a laptop when it connects [SOLVED]

    Ok, first let me say that I looked on the Internet for this but didn't find what I was looking for. I'm going to be setting up a simple file server for my in-laws soon... they use Windows, so I'll be using Samba on the server side.
    They have 3 computers and want to have all their files stored in one central place so they can share them easily and also have them backed up at the same time. They have 2 desktops and one laptop.
    The desktops are not a big deal to sync with the server because they are on all the time and I can just use a cron job or something. The problem is the laptop.
    What can I do to make the laptop automatically sync when it gets on the network?
    The catch is that this needs to be run from the server side so they have a "hands free" experience. If you know a way for the laptop to do this in the background, I guess that would work too.
    I know this is probably a very simple task, but so far in everything I find the user would have to manually start the sync.
    Thanks in advance for your help!

    Is it necessary that all the files are on the local hard disks? In the case of the laptop I could understand this if the laptop is also used outside the network, but the desktops should stay in place, no?
    It could be as easy as to make a shortcut on all the computers to the samba-share and tell the inlaws that they have to use the samba-share to store their (shared) files.
    You are correct about this but I doubt they would remember to use this all the time so I want to sync the folders just in case.
    Assuming the laptop is always going to be in the same IP address when it connects to the network (which is unlikely in default configurations), you could create a script that checks for the existence of the machine on the network then perform a sync.  I had a script written in Python somewhere that would check to make sure the server side was up and run unison, but it could be modified to check for the laptop and copy files.  Let me know if that's something you're interested in and I can post it for you.
    This sounds like what I'm looking for. Also, I wonder if it could be done by NetBIOS name instead of IP address? That way it wouldn't matter.
    The script sounds like what I'm looking for, though.
    Thanks for the help!
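    A rough sketch of that idea, written in Java here rather than the Python mentioned above (the "laptop" hostname, the share path and the rsync arguments are all hypothetical placeholders): resolve the laptop by name, which also covers the name-instead-of-IP question as long as the name resolves on the LAN, check that it is reachable, then run a one-way sync.

    import java.net.InetAddress;
    import java.net.UnknownHostException;

    // Poll for the laptop by name; when it responds, run a one-way sync.
    public class LaptopSyncWatcher {
        public static void main(String[] args) throws Exception {
            while (true) {
                try {
                    InetAddress host = InetAddress.getByName("laptop");
                    if (host.isReachable(2000)) {
                        new ProcessBuilder("rsync", "-a",
                                "rsync://laptop/share/", "/srv/backup/laptop/")
                                .inheritIO().start().waitFor();
                    }
                } catch (UnknownHostException e) {
                    // Laptop not on the network yet; keep waiting.
                }
                Thread.sleep(5 * 60 * 1000L); // check again in five minutes
            }
        }
    }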

  • Configuring a file server in KM and accessing/editing documents from it

    Hi friends,
    My requirement is to configure a file server where you will have all the structured and unstructured data stored, so users can share documents and create, edit and save documents from the file server itself.
    In KM, what kinds of file servers are there apart from the one supported by default?
    Can anybody please provide the configuration steps for setting up a file server in KM?
    Is the WebDAV protocol required to configure the file server?
    Points would be assigned for the helpful answer.
    Thanks in advance.
    Regards
    Sireesha.

    Dear Sireesha,
    Well, KM supports most file servers; however, there are some restrictions with Novell FS and Microsoft SharePoint Server, such as issues with versions and other metadata.
    To configure a file server you first need to create an FS Repository Manager. Details can be found in the help guide:
    FS Repository Manager: http://help.sap.com/saphelp_nw04/helpdata/en/e3/92322ab24e11d5993800508b6b8b11/frameset.htm
    Yes, the WebDAV protocol is required here.
    You can create a WebDAV RM as well:
    WebDAV: http://help.sap.com/saphelp_nw04/helpdata/en/4a/217fb6c33c6748a1715a161ac942cd/frameset.htm
    The above links will help answer your queries.
    Regards
    Anjali

  • File sharing in the File server preference pane

    System Preferences > File Server > File Sharing. When I choose a share point folder and want to change permissions, I click the radio button to restrict access to certain users and save. When I re-enter the folder it defaults back to letting everybody have access. If I try to delete the folder it won't delete. When I create a new folder by re-choosing the folder, I get <folder Name>1 and then I cannot delete it. Is there a plist in Preferences I can delete to clear this problem?

    Hello Peter,
    Thanks for your answer.
    There must be something wrong in the different steps I follow.
    First I set Time Machine on the server for each user, choosing the volume "SmartStor" (a network volume attached to the Mac Mini server); it creates a "Shared items" folder with a "Backups" folder inside.
    In Server Admin tools I can see the share point /Backups in the AFP services.
    In Workgroup Manager I set the Time Machine preferences for the different computers and computer groups with the path afp://myserver.local/Backups/
    Finally, on my client's computer I can choose the volume "Backups" "on my server".
    And it doesn't work... even with all the logins and passwords: the user's, Directory Admin and Server Admin.
    With the user's and Directory Admin credentials it runs quickly and says "the volume is not reachable on account of a wrong user name or password".
    With the Server Admin login and password it runs indefinitely; nothing happens...
    Is there something to do in Terminal?
    Well, I tried to find explanations on the web but have found nothing so far... The information I did find is about setting a network volume in Terminal in place of the default one...
    Hope to find out soon; I can't stand that something works for everyone but not for me...

  • Server 2012 File Server Cluster Shadow Copies Disappear Some Time After Failover

    Hello,
    I've seen similar questions posted on here before however I have yet to find a solution that worked for us so I'm adding my process in hopes someone can point out where I went wrong.
    The problem: After failover, shadow copies are only available for a short time on the secondary server.  Before the task to create new shadow copies happens the shadow copies are deleted.  Failing back shows them missing on the primary server as
    well when this happens.
    We have a 2-node cluster (hereafter server1 and server2) with a quorum disk. There are 8 disk resources mapped to the cluster via iSCSI. 4 of these disks are set up as storage and the other 4 are currently set up as shadow copy volumes for their respective storage volumes.
    Previously we weren't using separate shadow copy volumes and were seeing the same issue described in the topic title. I followed two other topics on here that seemed close and then set up the separate shadow copy volumes; however, it has yet to alleviate the issue. These are the two other topics:
    Topic 1: https://social.technet.microsoft.com/Forums/windowsserver/en-US/ba0d2568-53ac-4523-a49e-4e453d14627f/failover-cluster-server-file-server-role-is-clustered-shadow-copies-do-not-seem-to-travel-to?forum=winserverClustering
    Topic 2: https://social.technet.microsoft.com/Forums/windowsserver/en-US/c884c31b-a50e-4c9d-96f3-119e347a61e8/shadow-copies-missing-after-failover-on-2008-r2-cluster
    After reading both of those topics I did the following:
    1) Add the 4 new volumes to the cluster for shadow copies
    2) Made each storage volume dependent on its shadow copy volume in FCM
    3) Went to the currently active node directly and opened up "My Computer", I then went to the properties of each storage volume and set up shadow copies to go to the respective shadow copy volume drive letter with correct size for spacing, etc.
    4) I then went back to FCM and right clicked on the corresponding storage volume and choose "Configure Shadow Copy" and set the schedule for 12:00 noon and 5:00 PM.
    5) I noticed that on the nodes the task was created and that the task would failover between the nodes and appeared correct.
    6) Everything appears to failover correctly, all volumes come up, drive letters are same, shadow copy storage settings are the same, and 4 scheduled tasks for shadow copy appear on the current node after failover.
    Thinking everything was set up according to best practice, I did some testing by changing file contents throughout the day, making sure that previous versions were created as scheduled on server1. I then rebooted server1 to simulate failure. Server2 picked up the role within about 10 seconds and files were available. I checked and could still see previous versions for the files after failover that were created on server1. Unfortunately that didn't last: the next day before noon I was going to make more changes to files, to ensure not only that we could see the shadow copies created while server1 owned the file server role but also that the copies created on server2 would be seen on failback. I was disappointed to discover that the shadow copies were all gone, and failing back didn't produce them either.
    Does anyone have any insight into this issue?  I must be missing a switch somewhere or perhaps this isn't even possible with our cluster type based on this: http://technet.microsoft.com/en-us/library/cc779378%28v=ws.10%29.aspx
    Now here's an interesting part: shadow copies on 1 of our 4 volumes have been retained from both nodes through the testing, but I can't figure out what makes it different. I do suspect that perhaps the "Disk #s" in Computer Management / Disk Management need to be the same between servers? For example, on server1 the disk number for cluster volume 1 might be "Disk4" while on server2 the same volume might be called "Disk7". However, I think operations like this and shadow copy are based on the disk GUID, so perhaps this shouldn't matter.
    Edit: I checked on the disk numbers and see no correlation between what I'm seeing in shadow copy and what is happening to the numbers. All other items (quotas, etc.) fail over and work correctly despite these differences:
    Disk Numbers on Server 1:
    Format: "shadow/storerelation volume = Disk Number"
    aHome storage1 =   16 
    aShared storage2 = 09
    sHome storage3 =   01
    sShared storage4 = 04
    aHome shadow1 =   10
    aShared shadow2 = 11
    sHome shadow3 =   02
    sShared shadow4 = 05
    Disk numbers on Server 2:
    aHome storage1 = 16 (SAME)
    aShared storage2 = 04 (DIFF)
    sHome storage3 = 05 (DIFF)
    sShared storage4 = 08 (DIFF)
    aHome shadow1 = 10 (SAME)
    aShared shadow2 = 11 (SAME)
    sHome shadow3 = 06 (DIFF)
    sShared shadow4 = 09 (DIFF)
    Thanks in advance for your assistance/guidance on this matter!

    Hello Alex,
    Thank you for your reply.  I will go through your questions in order as best I can, though I'm not the backup expert here.
    1) "Did you see any event ID when the VSS fails?
    Please offer us more information about your environment, such as what type of backup you are using: software-based or a hardware VSS device."
    I saw a number of events on inspection.  Interestingly enough, the event ID 60 issues did not occur on the drive where shadow copies did remain after the two reboots.  I'm putting my event notes in a code block to try to preserve formatting/readability.
     I've written down events from both server 1 and 2 in this code block, documenting the first reboot causing the role to move to server 2 and then the second reboot going back to server 1:
    JANUARY 2
    9:34:20 PM - Server 1 - Event ID: 1074 - INFO - Source: User 32 - Standard reboot request from explorer.exe (Initiated by me)
    9:34:21 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Volume Shadow Copy service entered the running state."
    9:34:21 PM - Server 1 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    \Device\HarddiskVolumeShadowCopy49
    F:
    T:
    The locale specific resource for the desired message is not present"
    9:34:21 PM - Server 1 - Event ID 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    \Device\HarddiskVolumeShadowCopy1
    H:
    V:
    The locale specific resource for the desired message is not present"
    ***The above event repeats with only the number changing, drive letters stay same, citing VolumeShadowCopy# numbers 6, 13, 18, 22, 27, 32, 38, 41, 45, 51,
    9:34:21 PM - Server 1 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    \Device\HarddiskVolumeShadowCopy4
    E:
    S:
    The locale specific resource for the desired message is not present"
    ***The above event repeats with only the number changing, drive letters stay same, citing VolumeShadowCopy# numbers 5, 10, 19, 21, 25, 29, 37, 40, 46, 48, 48
    9:34:28 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The NetBackup Legacy Network Service service entered the stopped state."
    9:34:28 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Volume Shadow Copy service entered the stopped state.""
    9:34:29 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The NetBackup Client Service service entered the stopped state."
    9:34:30 PM - Server 1 - Event ID: 7036 - INFO - Source: Service Control Manager - "The NetBackup Discovery Framework service entered the stopped state."
    10:44:07 PM - Server 2 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Volume Shadow Copy service entered the running state."
    10:44:08 PM - Server 2 - Event ID: 7036 - INFO - Source: Service Control Manager - "The Microsoft Software Shadow Copy Provider service entered the running state."
    10:45:01 PM - Server 2 - Event ID: 48 - ERROR - Source: bxois - "Target failed to respond in time to a NOP request."
    10:45:01 PM - Server 2 - Event ID: 20 - ERROR - Source: bxois - "Connection to the target was lost. The initiator will attempt to retry the connection."
    10:45:01 PM - Server 2 - Event ID: 153 - WARN - Source: disk - "The IO operation at logical block address 0x146d2c580 for Disk 7 was retried."
    10:45:03 PM - Server 2 - Event ID: 34 - INFO - Source: bxois - "A connection to the target was lost, but Initiator successfully reconnected to the target. Dump data contains the target name."
    JANUARY 3
    At around 2:30 I reboot Server 2, seeing that shadow copy was missing after previous failure. Here are the relevant events from the flip back to server 1.
    2:30:34 PM - Server 2 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    \Device\HarddiskVolumeShadowCopy24
    F:
    T:
    The locale specific resource for the desired message is not present"
    2:30:34 PM - Server 2 - Event ID: 60 - ERROR - Source: volsnap - "The description for Event ID 60 from source volsnap cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
    If the event originated on another computer, the display information had to be saved with the event.
    The following information was included with the event:
    \Device\HarddiskVolumeShadowCopy23
    E:
    S:
    The locale specific resource for the desired message is not present"
    We are using Symantec NetBackup. The client agent is installed on both server1 and server2. We're backing them up based on the complete drive letter for each storage volume (this makes recovery easier). I believe this is what you would call "software-based VSS". We don't have the infrastructure/setup to do hardware-based snapshots. The drives reside on a Compellent SAN mapped to the cluster via iSCSI.
    2) "Confirm the following registry key exists:
    - HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\VSS\Settings"
    The key is there; however, the DWORD value is not. Would that mean that the default value is being used at this point?

  • How to set the default file path while downloading ALV output

    Hi,
    Can anyone tell me how to set the default file path when downloading ALV output to the local system using the Local File button on the toolbar?
    Kindly help me resolve this.
    Thanks in advance.
    Regards,
    Ashutosh Katara

    The initial value of this information is (maybe) stored in the Windows registry at Software\SAP\SAPGUI Front\SAP Frontend Server\Filetransfer -> PathDownload. You can read it through class CL_GUI_FRONTEND_SERVICES method GET_UPLOAD_DOWNLOAD_PATH and update it through method REGISTRY_SET_VALUE. (Otherwise there may be some parameter ID to force the value, but I'm no longer sure.)
    Regards,
    Raymond

  • Can anyone help me? Errors occur when creating shadow copies in a file server cluster (OS version is WSS 2012)

    I have built a failover cluster (file server, AP module) for sharing files on WSS 2012, and I want to use shadow copies to back up my data. When I make a shadow copy on a volume that has been added to the cluster (not a CSV; it is just added to the cluster and used to share files, so it plays the role of the file server), some errors occur and the shadow copy fails. The errors look like the following pictures:
    1: Disk F is added to the cluster. First I set up shadow copies by right-clicking on disk F, choosing Configure Shadow Copies, clicking Settings and then Schedule. After just a few seconds the error appears, as in picture 1: "the wait operation timed out".
    When I click the Schedule button once again, a different error occurs, like the following picture: "the object already exists". If I don't set the schedule first but use the default schedule and click the Enable button, the same error occurs; the only difference is that one shadow copy time point is created. You can also make shadow copies manually by clicking "Create now". Although that successfully creates the shadow copies, when I select a time point to revert, another error occurs: "A volume that contains operating system files or resides on a cluster shared disk cannot be reverted".
    In a word, all the errors above make scheduled shadow copies fail; only manual shadow copies succeed. What confuses me is that I have occasionally managed to create scheduled shadow copies successfully, but I don't know what made them succeed; it happens with small probability, and most of the time it fails. In any case, revert always fails.
    I'm sorry for my poor English; it's the first time I have asked for help in a forum in English, so I don't know whether I have described my question clearly. I have also tried other methods, like the link http://technet.microsoft.com/en-us/library/cc784118(v=ws.10).aspx, but the same errors occurred. Can anyone tell me how I can make shadow copies in a file server cluster (AP module)? Or am I making a mistake in my procedure? Looking forward to your reply. Thanks!

    Hi,
    Please check the following 2 places:
    HKEY_LOCAL_MACHINE\Cluster\Tasks
    C:\Windows\System32\Tasks
    First, please compare the permission settings of the folder C:\Windows\System32\Tasks with a working computer. Correct the permission settings if anything is wrong. Specifically, confirm that your current account does have permission on this folder.
    As it said "object already exists", find the schedules you created before, then back up and delete all of these schedules in both the registry key and the folder.
    Then try to create a new schedule and see if the issue still exists.
    Meanwhile, what kind of storage device are you using? The issue could occur on a specific storage device, so test enabling shadow copies on a local disk to see whether that works.
    Thank you for your reply. On a local volume none of these errors occurs; they happen only on a volume in the file server cluster. There is no value in HKEY_LOCAL_MACHINE\Cluster\Tasks. On a local volume everything goes well with shadow copies, so I do not think anything is wrong with the permission settings of the folder C:\Windows\System32\Tasks. The storage device is a SAN; we use RAID 6 and provide the LUNs to the NAS engine, and then make the volumes on these LUNs. Is anything wrong? Hoping for your reply~~

  • How to set the default file name for upload

    Hi All,
    I have the following BSP application for uploading a CSV file. I want the page, when displayed, to show the default file name to be loaded as c:\db1\currentPM.csv.
    What changes do I need to make to get the default file name into the BSP application?
    Thanks
    Karen
    <%@page language="abap" %>
    <%@extension name="htmlb" prefix="htmlb" %>
    <htmlb:content id               = "content"
                   design           = "classicdesign2002design2003"
                   controlRendering = "sap"
                   rtlAutoSwitch    = "true" >
      <htmlb:page title="File Upload " >
        <htmlb:form method       = "post"
                    encodingType = "multipart/form-data">
              <htmlb:textView text   = "File:"
                              design = "STANDARD" />
              <htmlb:fileUpload id          = "uploadID"
                                onUpload    = "UploadFile"
                                upload_text ="Upload"/>
        </htmlb:form>
      </htmlb:page>
    </htmlb:content>
    OnInputProcessing:
    " Event handler for checking and processing user input and
    " for defining navigation
    DATA: EVENT TYPE REF TO IF_HTMLB_DATA,
          DATA TYPE REF TO CL_HTMLB_FILEUPLOAD,
          LV_OUTPUT_LENGTH TYPE I,
          LV_TEXT_BUFFER TYPE STRING,
          FILE_NAME TYPE STRING,
          FILE_PATH TYPE STRING ,
          INTERN TYPE TABLE OF  ZALSMEX_TABLINE.
    DATA: LT_BINARY_TAB TYPE TABLE OF SDOKCNTBIN .
    TYPES: BEGIN OF TY_TAB,
           FIELD1(2) TYPE C,
           FIELD2(2) TYPE C,
           FIELD3(2) TYPE C,
           FIELD4(2) TYPE C,
           FIELD5(2) TYPE C,
           END OF TY_TAB.
    DATA:  WA_TAB TYPE TY_TAB,
           IT_TAB TYPE TABLE OF TY_TAB.
    TYPES: BEGIN OF TY_LINE,
              LINE(255) TYPE C,
           END OF TY_LINE.
    DATA:  WA_LINE TYPE TY_LINE,
           IT_LINE TYPE TABLE OF TY_LINE.
    EVENT = CL_HTMLB_MANAGER=>GET_EVENT_EX( REQUEST ).
    IF EVENT IS NOT INITIAL AND EVENT->EVENT_NAME = HTMLB_EVENTS=>FILEUPLOAD.
      DATA ?= CL_HTMLB_MANAGER=>GET_DATA( REQUEST = RUNTIME->SERVER->REQUEST NAME = 'fileUpload' ID = 'uploadID' ).
      FILE_NAME = DATA->FILE_NAME.
      FILE_PATH = FILE_NAME.
      IF DATA IS NOT INITIAL.
    CALL FUNCTION 'SCMS_XSTRING_TO_BINARY'
         EXPORTING BUFFER = DATA->FILE_CONTENT
         IMPORTING OUTPUT_LENGTH = LV_OUTPUT_LENGTH
         TABLES BINARY_TAB = LT_BINARY_TAB .
    CALL FUNCTION 'SCMS_BINARY_TO_STRING'
        EXPORTING INPUT_LENGTH = LV_OUTPUT_LENGTH
         IMPORTING TEXT_BUFFER = LV_TEXT_BUFFER
         TABLES
         BINARY_TAB = LT_BINARY_TAB.
    IF SY-SUBRC <> 0. " display the message raised by the conversion
    MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
    WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
        ENDIF.
        SPLIT LV_TEXT_BUFFER AT CL_ABAP_CHAR_UTILITIES=>CR_LF INTO TABLE IT_LINE.
        IF SY-SUBRC = 0.
          LOOP AT IT_LINE INTO WA_LINE.
        " The file is CSV, so split each line at the comma
        " (use CL_ABAP_CHAR_UTILITIES=>HORIZONTAL_TAB instead for tab-delimited files)
        SPLIT WA_LINE AT ','
          INTO WA_TAB-FIELD1 WA_TAB-FIELD2 WA_TAB-FIELD3 WA_TAB-FIELD4 WA_TAB-FIELD5.
        APPEND WA_TAB TO IT_TAB.
          ENDLOOP.
        ENDIF.
      ENDIF.
    ENDIF.

    Also, I missed another point.
    In the folder c:\dbdata I have a number of CSV files on the user frontend. I would like the BSP application to get the list of files in the folder and process them one after the other. How can I get the list of files in the folder on the user's PC, and how can I process them one after another? (See the sketch after this message for the general pattern.)
    I want the form to display only the default folder, and once I press Upload it must process all the files and display the processing status on the same page.
    Please kindly share ideas on how I can implement this app.
    Thanks
    Karen
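    As a generic illustration of that pattern (list a folder, then process each file in turn), here is a sketch in Java; note it is only an analogy, since in a BSP application the folder lives on the user's frontend PC, where an ABAP class such as CL_GUI_FRONTEND_SERVICES (method DIRECTORY_LIST_FILES) would do the enumeration. The folder path is the one named in the post; the processing step is a placeholder.

    import java.io.File;

    public class ProcessCsvFolder {
        public static void main(String[] args) {
            File folder = new File("c:/dbdata"); // folder named in the post
            // Keep only the CSV files, then handle them one after another.
            File[] files = folder.listFiles(
                    (dir, name) -> name.toLowerCase().endsWith(".csv"));
            if (files == null) return; // folder missing or not a directory
            for (File f : files) {
                System.out.println("Processing " + f.getName());
                // ... parse/upload each file and record its status here ...
            }
        }
    }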

  • Passing Dynamic File Name to ODI Interface for processing to another system

    Hi,
    I need help with passing a dynamically named, fixed-length flat file to an ODI interface. The interface takes the flat file as input and processes it into SQL Server, applying data mappings and transformations... The input flat file name is sequence-generated, e.g. OEORD1123.txt, the next file will be OEORD1124.txt, and it sits on the Oracle concurrent tier. How do I pass the latest file name to the ODI interface for processing?
    Regards,
    Anil..

    Hi Guys...
    I would like to suggest a way.
    a) create a single interface with a dynamic resource name (an ODI variable) and a filter on the month column like:
    month_column = '#vCountMonth'
    b) in the refresh tab of the first variable (I named it "vMonth"), use the following query (the variable should be alphanumeric, "not persistent"):
    select to_char(to_date('#vCountMonth','MM'),'month') from dual
    c) create one more ODI variable (I named it "vCountMonth"), alphanumeric, not persistent, and in its refresh tab write:
    select lpad(to_char(#vCountMonth + 1), 2, '0') from dual
    d) now just create a package, drag and drop the objects in the following order:
    d.1) vCountMonth in set mode and set = 0 (zero)
    d.2) vCountMonth in refresh mode
    d.3) vMonth in refresh mode
    d.4) the interface
    d.5) vCountMonth in evaluate mode, evaluating "= 12"
    ==> if NO (KO, red line), link the KO line back to step d.2
    ==> an OK line is not necessary unless you have other steps after the evaluation finishes
    Does that make sense? It is a single loop, so the interface is developed only once.
    Please remember to mark each thread reply as Useful or Correct if it helped you...

  • Problem with inserting XML data server in ODI

    Hi,
    I was trying to insert an XML data server in ODI. I want to use it for my target database, i.e. I want my target to be an XML file. So while specifying the URL in the data server, what should I mention as the file name, DTD file, root, etc.? What I have done is create the DTD file as per my requirement, and I have created an empty XML file. While testing the connection an error comes: java.sql.SQLException: A parsing exception occurred saying Whitespace required.
    Next I tried putting just the root tags in the XML file without any content; this returned the same error. Next I tried inserting all the tags as per my DTD file; the same error came...
    Please help.
    Regards,
    Divya

    For an empty XML file, try using:
    <?xml version="1.0" encoding="UTF-8"?>
    <ROOT_SOME></ROOT_SOME>
    and as the JDBC connect string:
    jdbc:snps:xml?f=../demo/xml/1/file.xml&ro=false&ldoc=true&case_sens=true&s=LEO_FIZ&dod=true
    and try again...
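    If you want to sanity-check the connect string outside ODI, a minimal Java sketch is below; it assumes the ODI XML driver jar is on the classpath and reuses the file path, schema name and options from the connect string above:

    import java.sql.Connection;
    import java.sql.DriverManager;

    public class XmlDataServerCheck {
        public static void main(String[] args) throws Exception {
            // Driver class name as documented for the ODI XML driver.
            Class.forName("com.sunopsis.jdbc.driver.xml.SnpsXmlDriver");
            String url = "jdbc:snps:xml?f=../demo/xml/1/file.xml"
                    + "&ro=false&ldoc=true&case_sens=true&s=LEO_FIZ&dod=true";
            try (Connection conn = DriverManager.getConnection(url)) {
                System.out.println("XML file parsed and connection opened.");
            }
        }
    }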

  • Browsing and opening files from a file server

    Hello!
    I am interested in whether it is possible with AIR to browse files from a file server that is connected to a local network and open them. For example, I want to create an application that opens template files with their native programs and saves them on the file server under a different name.
    Thank you in advance
    Lynda

    AIR 2 (now at labs.adobe.com) has features you are looking for. An AIR 2 application can open a file in the default system application registered for the file type. (The file server must be a mounted volume on the computer.)
    An extended desktop application (an AIR 2 application that is installed with a native installer) can communicate with another application. So, if the AIR (extended desktop) application knows the path to the native application, it can open that application and communicate with it. If the native application has APIs for opening and saving files, the AIR application can communicate with it using those APIs. (So, this functionality depends on the capabilities of the native application.)
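    For comparison only, since the thread is about AIR: the same "open with the system-default application" idea in plain Java is a one-liner via java.awt.Desktop; the mounted-volume path below is hypothetical.

    import java.awt.Desktop;
    import java.io.File;

    public class OpenTemplate {
        public static void main(String[] args) throws Exception {
            File template = new File("/Volumes/fileserver/templates/letter.dot");
            if (Desktop.isDesktopSupported()) {
                // Launches whatever application the OS registers for .dot files.
                Desktop.getDesktop().open(template);
            }
        }
    }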
