Performance KM repository - Filesystem vs. WebDAV

Hi,
I'm a newbie to KM repositories so I hope this question makes sense.
I've got two filesystem repositories in my KM. One server is remote and reachable over a 4 Mbit/s line; the other one is located within the intranet.
When I browse both filesystems in EP, the response time of the intranet server's filesystem is acceptable, whereas the response time of the remote filesystem is very slow.
So far, not surprising. But when I browse both filesystems via Windows Explorer, they have nearly the same performance (again 4 Mbit/s vs. intranet).
My questions:
1) What could be the reason for that? Is it maybe a different protocol?
2) Would it help if I connected the remote filesystem via a WebDAV repository instead?
3) What exactly are the differences between a WebDAV repository and a filesystem repository, regarding both performance and functionality?
4) Is it possible to use the filesystem via WebDAV and also make changes directly on the filesystem?
Thanks a lot for your help in advance.
Joschi

1) SMB is known to perform poorly over high-latency links (I think). Not sure why this doesn't show up in Windows Explorer, though. Maybe heavy client-side caching?
2) You can try that.
3) Performance may be better, because the WebDAV protocol exchanges were designed with remote access in mind (fewer round trips per operation).
4) Yes.
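To get a feel for 2) and 3): a WebDAV folder listing is a single PROPFIND request/response, while SMB typically needs several round trips per folder, so every millisecond of latency gets multiplied. A minimal sketch (java.net.http, Java 11+) to time that one exchange against your remote server; the URL is hypothetical and authentication is left out:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.time.Duration;

    public class WebDavListTimer {
        public static void main(String[] args) throws Exception {
            // Hypothetical WebDAV URL - replace with your remote share.
            URI folder = URI.create("http://remote.example.com/webdav/share/");

            HttpClient client = HttpClient.newBuilder()
                    .connectTimeout(Duration.ofSeconds(10))
                    .build();

            // Depth: 1 asks the server for the folder's direct children
            // in a single request/response exchange.
            HttpRequest propfind = HttpRequest.newBuilder(folder)
                    .method("PROPFIND", HttpRequest.BodyPublishers.noBody())
                    .header("Depth", "1")
                    .build();

            long start = System.nanoTime();
            HttpResponse<String> response =
                    client.send(propfind, HttpResponse.BodyHandlers.ofString());
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;

            // 207 Multi-Status is the expected success code for PROPFIND.
            System.out.println("Status: " + response.statusCode()
                    + ", elapsed: " + elapsedMs + " ms");
        }
    }

If this one exchange comes back quickly, the slowness you see is latency-bound SMB chatter rather than raw bandwidth.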
Best regards, Julian

Similar Messages

  • What is the recommended way of connecting to the repository out of WebDAV, RMI, JNDI, and the JCA connector?

    What is the recommended way of connecting to the repository, out of the WebDAV, RMI, JNDI, and JCA connector possibilities provided by CQ 5.5?

    Hi dp_adusumalli,
    I recognized your list of ~8 questions you posted at around the same time, as I received that same list in our customer implementation from Arif A., from the India team, visiting San Jose. :-)
    I provided him feedback for most of the questions, so please check back with Arif for that info.
    For this particular question, can you provide specifics for the types of interactions you are interested in?
    Understanding the kinds of things you need to achieve will help determine which of the CQ/CRX interfaces is best suited for the task(s).
    I've collated a few points on this subject on this page:
    Manipulating the Adobe WEM/CQ JCR
    Regards,
    Paul

  • The following error occurred while performing a repository action. Error - [PCSF_10007] Cannot connect to repository [dev_repo] because [REP_61082] AdminConsole's code page (UTF-8 encoding of Unicode) is not one-way compatible to repository dev_repo's code page

    Hello Experts,
    I am facing the below error on the console page. Can someone help me out here?
    The following error occurred while performing a repository action. Error - [PCSF_10007] Cannot connect to repository [dev_repo] because [REP_61082] AdminConsole's code page (UTF-8 encoding of Unicode) is not one-way compatible to repository dev_repo's code page (MS Windows Latin 1 (ANSI), superset of Latin1). Failed to connect to repository service [dev_repo].
    Below are my environment variables in .bash_profile:
    LANG=en_US.utf8; export LANG
    LC_ALL=en_US.utf8; export LC_ALL
    INFA_CODEPAGENAME=UTF-8; export INFA_CODEPAGENAME
    Thanks,
    Ahmed.
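    One quick sanity check is to confirm those variables are actually visible to processes started from that shell. A generic JVM probe (nothing Informatica-specific; the variable names are taken from the profile above):

        import java.nio.charset.Charset;

        public class CodePageCheck {
            public static void main(String[] args) {
                // Print the locale/code-page variables set in .bash_profile,
                // to confirm the profile was actually sourced.
                for (String var : new String[] {"LANG", "LC_ALL", "INFA_CODEPAGENAME"}) {
                    System.out.println(var + "=" + System.getenv(var));
                }
                // The JVM's default charset usually follows LANG/LC_ALL.
                System.out.println("Default charset: " + Charset.defaultCharset());
            }
        }

    If the variables print as expected and the error persists, the mismatch is between the console's UTF-8 and the repository's MS Windows Latin 1 code page, which the environment variables alone won't change.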

  • Poor performance in Repository mapping test tool (7.0.12)

    Hi,
    I'm just looking for some hints for performance tuning the Mapping tool in the Repository. This is our Development machine.
    It seems to be significantly worse since the upgrade from SPS 10 to SPS 12.
    Every "display queue" / recompile of the map takes almost 5 minutes.
    The Java stack currently has about 4 GB of heap, ABAP 1 GB. We reallocated a large amount from ABAP to Java, which temporarily alleviated the problem (about 60% quicker), but it was back after a few days. There is currently no possibility of adding more memory.
    The problem seems to be noticeable primarily with the IB, hence I'm looking for any performance tuning tips that would directly affect this. I'm thinking that we need to concentrate on the Java stack. I've looked at note 894509 and the SAP NetWeaver Process Integration Tuning Guide and not seen anything directly related at first glance.
    I've deactivated end-to-end monitoring.
    The only thing that stands out is a significant amount of activity on the disk with the sap/oracledb/oracleexec on it; it's about 9 times greater than the amount of swap. Two processes seem to be regularly at the top of CPU usage: dw.sapPID_DVEBMGS14 and /usr/sap/PID/DVEBMG.
    Hints and tips appreciated.  As I said, hardware changes are not possible at this time.
    Thanks
    James.
    Yes, points will be awarded where appropriate.

    Are you facing this issue only when you test the mapping in the IR, or at runtime as well?
    I have also noticed that mapping tests in the IR tool always take a long time, but as they never reflect the runtime mapping performance, I have always ignored them.
    A few reasons might be,
    1. Low RAM on the machine you are using.
    2. Too many applications running!
    Remember you are connecting to the server from your IR using Java Web Start (JWS), so the resources on your machine also matter quite a bit; the quick check below shows how much heap your local VM actually gets.
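    A minimal, generic sketch (not part of the IB tooling; run it under the same JVM settings your Web Start client gets):

        public class HeapHeadroom {
            public static void main(String[] args) {
                Runtime rt = Runtime.getRuntime();
                long mb = 1024 * 1024;
                // max   = the -Xmx ceiling the VM will never exceed
                // total = heap currently reserved from the OS
                // free  = unused part of the reserved heap
                System.out.println("max   = " + rt.maxMemory() / mb + " MB");
                System.out.println("total = " + rt.totalMemory() / mb + " MB");
                System.out.println("free  = " + rt.freeMemory() / mb + " MB");
            }
        }

    A default Web Start VM often gets only a small -Xmx, which would explain a sluggish mapping editor even when the server itself is healthy.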
    If this is completely off track, ignore the reply.
    Regards
    Bhavesh

  • OWB performance with repository browser

    Hi,
    I just want to know... is there some way in the Repository Browser or OWB to see how much time was spent executing each record in a query?
    For example, if I have 20 records executed in a single mapping... I want to know how much time the mapping needs, and the timing of each record...
    Anyone has suggestion?
    Thanks in advance,
    Davis

    I am also not quite sure how you could do this in a way that guarantees an accurate measurement.
    One possible approach is to append a timestamp column to the source table, and populate the field with systimestamp (for the sub-second granularity) in the mapping. Then, after the load, you could sort all your records by this column, find a chunk together, and compare load times.
    But even this would be like swatting a mosquito with a bat, and it may not even be fully accurate under certain loading scenarios (not to mention the fact that technically this would make the mapping run slower than it actually is, since you've added a whole new column populated by a function call!)
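    For what it's worth, the after-the-fact comparison could look like the JDBC sketch below, assuming a hypothetical LOAD_TS column of that kind on a hypothetical TARGET_TABLE (connection details are made up; requires the Oracle JDBC driver on the classpath):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;
        import java.sql.Timestamp;

        public class LoadTimeDeltas {
            public static void main(String[] args) throws Exception {
                try (Connection con = DriverManager.getConnection(
                             "jdbc:oracle:thin:@//dbhost:1521/orcl", "owb_user", "secret");
                     Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(
                             "SELECT id, load_ts FROM target_table ORDER BY load_ts")) {
                    Timestamp prev = null;
                    while (rs.next()) {
                        Timestamp ts = rs.getTimestamp("load_ts");
                        if (prev != null) {
                            // The gap between consecutive rows approximates the
                            // per-row load time for a row-by-row load; set-based
                            // loads stamp whole chunks with nearly the same value.
                            System.out.println("row " + rs.getLong("id") + ": +"
                                    + (ts.getTime() - prev.getTime()) + " ms");
                        }
                        prev = ts;
                    }
                }
            }
        }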
    -J

  • Problem accessing filesystem with WebDAV

    Hi,
    I'm having some issues with WebDAV. I can connect and everything, and the root folder shows up as it should in the Finder, but when I try to access a folder inside the root directory, it starts to load and never displays any content.
    The error log on the server gets flooded with:
    [Date] [error] [client IP] Provider encountered an error while streaming a multistatus PROPFIND response. [404, #0]
    Any ideas?

    What's the WebDAV client and the WebDAV server here? I'm going to guess that Mac OS X Snow Leopard Server is involved, but it could be either the client or the server.
    What else is in the logs, specifically adjacent to the error?
    Provider encountered an error while streaming a multistatus PROPFIND response. 404, #0
    If you're serving WebDAV from Mac OS X Snow Leopard Server, which users are enabled for access to the WebDAV share?
    What are the protection settings on the files being shared by WebDAV? (The shell command ls -al {directory path} will show the settings of the files.)
    Also turn on debug-level logging within Apache, and see if the logs then show anything interesting for a subsequent WebDAV access failure.
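    To see which child resource trips the 404 inside the multistatus reply, you can also replay the listing yourself and dump the raw response. A minimal sketch (java.net.http, Java 11+; hypothetical URL, add credentials as your server requires):

        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class PropfindDump {
            public static void main(String[] args) throws Exception {
                // Hypothetical URL of the folder that never finishes loading.
                URI folder = URI.create("http://server.example.com/webdav/problem-folder/");

                HttpRequest req = HttpRequest.newBuilder(folder)
                        .method("PROPFIND", HttpRequest.BodyPublishers.noBody())
                        .header("Depth", "1")   // list direct children only
                        .build();

                HttpResponse<String> resp = HttpClient.newHttpClient()
                        .send(req, HttpResponse.BodyHandlers.ofString());

                // A healthy listing answers 207 Multi-Status; scan the XML body
                // for per-resource <D:status> entries that mention 404.
                System.out.println("HTTP " + resp.statusCode());
                System.out.println(resp.body());
            }
        }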

  • [SOLVED] gparted performance - creating ext4 filesystem takes forever

    17 hours have passed since I issued the creation of a new partition (150 GB) with an ext4 filesystem on it, using the newest version of gparted.
    Does it really take that long? Is it normal?
    I have no way of telling whether it's doing anything at the moment. All I see in the GUI is the, let's call it, "progress bar" swinging back and forth.
    In the details I can see it's issuing the mkfs.ext4 command, but that command is nowhere to be found using ps.
    Only gparted and gpartedbin are running, and they have 0% CPU usage according to top.

    NotFromBrooklyn wrote: Something must have failed. The largest partition I remember creating manually was 80 GB, and it took me less time than I need to finish a cup of coffee.
    Yep, it seemed strange to me that it's THAT slow; I just needed someone's confirmation, thanks. I killed the whole process, and the partition is there with the ext4 filesystem prepared, but I guess it won't be a good idea to assume it has been set up properly. There are already 2.54 GB marked as used for some reason.
    I'll try to recreate the filesystem from some live-CD environment.
    graysky wrote: ...are you resizing a partition and then making a new one?
    Nah, no resizing, only the one operation. I was just organizing my 2nd SATA II HDD to serve as a backup for my original system, from which I was using gparted to create the new partitions on that disk.

  • Changing WebDAV repository name

    Hi All,
    We created a WebDAV repository with the name WebDAV in the KM tab. We tried to change its name through Entry Links, but it's not possible. Do we have to change anything in the associated IIS and/or in the WebDAV repository configuration?
    Best Regards,
    Ashwin

    Hi,
    System Administration --> System Configuration --> Knowledge Management --> Content Management --> Repository Managers --> WebDAV Repository.
    Good Luck
    Eduardo

  • How To create a WebDAV repository...

    Is there any documentation available for creating a WebDAV repository and making it available for TREX to index and search? I tried to follow the help doc in creating such a repository, but the WebDAV repository I created does not show up in the KM repository listing.
    I'm running EP6 SP2 Patch 3.
    thanks for the help,
    Biju.

    Hi Paul,
    or use the KM component monitor: System Administration -> Monitoring -> Knowledge Management -> Component Monitor
    Regards,
    Thilo

  • Problem with integrating Microsoft Exchange Server as a WebDAV repository

    Hi all
    We are using MS Outlook 2000.
    I created an HTTP system with the Exchange server's domain name and the corresponding user ID and password required for authentication.
    Then I created the repository manager using the WebDAV Repository template. But when I go to component monitoring, it shows an error for the server: "server not connected".
    How do I rectify the error?

    Hi,
    Have you set the protocol to SMTP?
    You have to create a system alias for the Exchange server. Have you done that?
    See this link
    http://help.sap.com/saphelp_nw04/helpdata/en/1d/3d59fdaa5ebb45967ea107d3fa117a/frameset.htm
    regards,
    Ganesh.N

  • Business Rules 'RUL-01216' Error when using WEBDAV

    Hi!
    I have successfully set up a WebDAV directory. Inside Rule Author, I connect to a repository, select WebDAV, enter the URL of the WebDAV folder and the user credentials, and successfully connect to the WebDAV directory.
    Now, when it's time to create a new dictionary, I get this rule error: 'RUL-01216: Error saving dictionary myDictionary.'
    stack trace:
    Cannot perform operation. 'RUL-01216: Error saving dictionary myDictionary. Please refer to the base exception. Root Cause: WebDAV error. '
    oracle.rules.sdk.store.StoreException: WebDAV error.
        at oracle.rules.sdk.store.webdav.WebDAVStore.listDocuments(WebDAVStore.java:816)
        at oracle.rules.sdk.repository.impl.RuleRepositoryImpl._exists(RuleRepositoryImpl.java:162)
        at oracle.rules.sdk.repository.impl.RuleRepositoryImpl.saveAs(RuleRepositoryImpl.java:464)
        at oracle.rules.ra.repos.ReposManager.create(ReposManager.java:382)
        at oracle.rules.ra.uix.mvc.ReposEH.create(ReposEH.java:894)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:585)
        at oracle.rules.ra.uix.mvc.BeanEH.genericHandleEvent(BeanEH.java:869)
        at oracle.rules.ra.uix.mvc.BeanEH.handleEvent(BeanEH.java:838)
        at oracle.cabo.servlet.event.TableEventHandler.handleEvent(Unknown Source)
        at oracle.cabo.servlet.event.TableEventHandler.handleEvent(Unknown Source)
        at oracle.cabo.servlet.event.BasePageFlowEngine.handleRequest(Unknown Source)
        at oracle.cabo.servlet.AbstractPageBroker.handleRequest(Unknown Source)
        at oracle.cabo.servlet.ui.BaseUIPageBroker.handleRequest(Unknown Source)
        at oracle.cabo.servlet.PageBrokerHandler.handleRequest(Unknown Source)
        at oracle.cabo.servlet.UIXServlet.doGet(Unknown Source)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:743)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
        at ...

    Hi,
    I just faced the same problem you have. In my case it seemed to be related to extra content (files) at the WebDAV address. If you are having this problem, try an empty address.
    Hope this helps.
    Alex

  • Error while promoting the repository to a global repository

    Hi,
    After restoring the repository, I try to promote the repository to a global repository in exclusive mode. When I check the Global Repository check box and click OK, a user name & password prompt pops up. I have used the admin user name and password. It gives me an error:
    The following error occurred while performing a repository action. Error - [PCSF_10007] Cannot connect to repository [PowerCenter] because: [[REP_57060] Login failure. The user admin is not valid for the repository PowerCenter Failed to connect to repository service [PowerCenter].].
    Can anyone provide more information on this? Your guidance will be greatly appreciated.
    Thanks in advance,
    Ravi

    Hi Experts,
    I can connect to the Informatica repository with the Administrator user, but the Integration Service is always down. How do I bring it up?
    Here is the log info:
    INFO Thu Feb 24 17:53:50 2011 6752 SF_34014 Service [Hyperion_Integration_Service] on node [PIFEL7_auhodifelp9] shut down.
    FATAL Thu Feb 24 17:53:50 2011 6752 SF_34004 Service initialization failed.
    ERROR Thu Feb 24 17:53:50 2011 6752 CMN_1006 Failed to connect to repository [Hyperion].
    ERROR Thu Feb 24 17:53:50 2011 6752 REP_12400 Repository Error ([REP_55102] Failed to connect to repository service [Hyperion].)
    ERROR Thu Feb 24 17:53:50 2011 6752 REP_12400 Repository Error (Failed to connect to repository service [Hyperion].)
    ERROR Thu Feb 24 17:53:50 2011 6752 REP_12014 An error occurred while accessing the repository
    ERROR Thu Feb 24 17:53:50 2011 6752 REP_12400 Repository Error (Failed to connect to repository service [Hyperion].)
    ERROR Thu Feb 24 17:53:50 2011 6752 REP_12400 Repository Error ([REP_57060] Login failure. The user HCTR is not valid for the repository Hyperion)
    ERROR Thu Feb 24 17:53:50 2011 6752 REP_12400 Repository Error (The user HCTR is not valid for the repository Hyperion)
    ERROR Thu Feb 24 17:53:50 2011 6752 REP_12014 An error occurred while accessing the repository
    INFO Thu Feb 24 17:53:50 2011 6752 CMN_1569 Server Mode: [ASCII]
    INFO Thu Feb 24 17:53:50 2011 6752 CMN_1570 Server Code page: [MS Windows Latin 1 (ANSI), superset of Latin1]
    INFO Thu Feb 24 17:53:50 2011 6752 SF_34012 Opened Service Port [10010] to listen for client connections.
    INFO Thu Feb 24 17:53:50 2011 6752 SF_34002 Service is initializing.
    Can someone please suggest a solution for this?
    Thanks in advance.
    Krishan

  • Connection Override in Performance Management

    A universe can be imported into the Performance Management repository with only one connection.
    Since connection override is not an option, I have to maintain multiple copies of the universes in the PM repository, each having a different connection, and create several metrics for each copy of the universe.
    This is getting increasingly difficult to administer as we keep adding more clients. Is there an easier workaround?

    The only way to achieve what you are looking for is to do what you are currently doing, which is to have separate universes for each connection that you are using. 
    Each universe itself can only have 1 connection. 
    I still don't really understand how having multiple connections to the same universe is going to help with duplication of data.

  • Problem with the Repository service

    Hi friends,
    In my Informatica Admin Console web page, I can see the health of both the Repository Service and the Integration Service as up.
    But when I click my Repository Service individually, it shows:
    The PowerCenter Repository Service is available
    But along with the above line, it also shows the error below:
    The following error occurred while performing a repository action. Error - [PCSF_10007] Cannot connect to repository [Oracle_BI_DW_Base] because [REP_57071] Unable to connect to the repository database. Please check the repository agent configuration. Failed to connect to repository service [Oracle_BI_DW_Base].
    But my Integration Service is up and fine.
    Due to the above error, I couldn't connect with any of my Informatica clients (the repository seems to be down, judging from the error message), even though the Informatica service itself is up.
    I'm not sure about this problem, but I remember that my DBAs overwrote the DB (the one I'm using with BIAPPS) with some snapshots; could that be the problem? As far as I'm aware, the snapshot is a recent one, and there is no chance of it having destroyed my BIAPPS metadata.
    I couldn't figure out the problem clearly. Kindly help me with this.
    Thanks in advance.
    Regards,
    Saro

    Svee,
    Yes, you are right. I started this Informatica service long ago (and didn't stop it). Now, when I try to restart the Informatica services, the service starts and then suddenly, after about 5 seconds, goes down again.
    I restarted it again in services.msc, and when I refresh (at the top of services.msc), the Informatica service has gone down again.
    This problem has started again. What could be the fix for this, Svee?
    Regards,
    Saro

  • MDM - Import Repository schema

    Hi all,
    Has anyone tried the "Import Repository Schema" operation in the MDM Console?
    I want to know its purpose and how it works.
    Please provide me the materials if you have any.
    Thanks in advance.

    Hi ArunParbhu,
    We perform "Import Repository schema"  in order to import i.e. define schema of the repository. We can import repository schema using an MDM schema file.For this operation ,the repository must be unloaded."Import repository schema" operation will create new tables and fields in the repository.
    U can get more details about how to import repository schema from console reference guide available at service market place.It is very well explained after page 210  onwards.
    Hope it will help u.
    Thanks,
    <b>
    Shiv Prashant Dixit</b>
