Mounting Repository Problem

Hi All,
          When I mount the MDM server, a new pop-up window opens. It shows "Error mounting Server" and "File failed a CRC check", while the status shows "Communication Error". If anybody knows how to correct this, please help as soon as possible.
Thanks in advance,
Mandeep Saini

Hi Mandeep,
I fully agree with Cleopantra. I have faced this problem many times; it occurs when you try to mount the MDM server using an MDM Console of a different version than the MDM server.
To resolve this, you need to install the MDM Console of the same version as the MDM server.
Please check the MDM server version and install the Console of that same version, and your problem will be solved.
To check the version of the MDM server, open any log file. You will see the following entries at the top of the log file:
<b>
Host name: MDM Server Name
Process id:
Compile type: RELEASE_32
MDM Server version: 5.5.41.70 Built on 2007-Jun-19
Repository schema version: 1.264
</b>
The "MDM Server version" line tells you the version of the MDM server.
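If you want to check this from a script, the version line can be extracted with a few lines of Python (just a sketch; the helper name and regex are mine, matching only the header format shown above):

```python
import re

# Sketch: extract the "MDM Server version" value from a log header
# like the example above. The pattern is based on that sample line only.
def mdm_server_version(log_text):
    match = re.search(r"MDM Server version:\s*([\d.]+)", log_text)
    return match.group(1) if match else None

header = """Host name: MDM Server Name
Compile type: RELEASE_32
MDM Server version: 5.5.41.70 Built on 2007-Jun-19
Repository schema version: 1.264"""

print(mdm_server_version(header))  # 5.5.41.70
```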
Hope this helps. Revert if you have any more queries.
Thanks,
<b>Shiv Prashant Dixit</b>

Similar Messages

  • Howto mount repository

    Dear KM's gurus,
    how do I mount a file system repository to the root directory ("/") in CM? I have a file system directory which should be integrated into the portal. I have created a file system repository with the prefix "simple". I looked in the root directory of the KM content but did not find a directory like "simple". What is the matter? How do I mount the repository to the root directory?
    Thanks a lot.

    Hi Alexey,
    for this topic there are some good references available here on SDN:
    [https://wiki.sdn.sap.com/wiki/display/KMC/IntegratingWindowsDocumentstoKM]
    [https://wiki.sdn.sap.com/wiki/display/KMC/FileSystemRepository-W2KSecurityManager]
    [https://wiki.sdn.sap.com/wiki/display/KMC/CMRepositoryinFSDBMode]
    [http://help.sap.com/saphelp_nw04s/helpdata/en/ed/b334ea02a2704388d1d2fc3e4298ad/frameset.htm]
    Best regards,
    Denis

  • OVM 2.2 and NFS Repository Problems

    Hi All.
    I have recently started trying to upgrade our installation to 2.2
    but have run into a few problems, mostly relating to the different
    way that storage repositories are handled in comparison to 2.1.5.
    We use NFS here to provide shared storage to the pools.
    I wanted to set up a new two-node server pool (with HA), so I upgraded
    one of the servers from 2.1.5 to 2.2 to act as pool master. That
    worked OK and this server seems to be working fine in isolation:
    master# /opt/ovs-agent-2.3/utils/repos.py -l
    [ * ] 865a2e52-db29-48f1-98a0-98f985b3065c => augustus:/vol/OVS_pv_vpn
    master# df /OVS
    Filesystem 1K-blocks Used Available Use% Mounted on
    augustus:/vol/OVS_pv_vpn
                   47185920 16083008 31102912 35% /var/ovs/mount/865A2E52DB2948F198A098F985B3065C
    (I then successfully launched a VM on it.)
    The problem is when I try to add a second server to the pool. I did
    a fresh install of 2.2 and configured the storage repository to be the
    same as that used on the first node:
    vm1# /opt/ovs-agent-2.3/utils/repos.py --new augustus:/vol/OVS_pv_vpn
    vm1# /opt/ovs-agent-2.3/utils/repos.py -r 865a2e52-db29-48f1-98a0-98f985b3065c
    vm1# /opt/ovs-agent-2.3/utils/repos.py -l
    [ R ] 865a2e52-db29-48f1-98a0-98f985b3065c => augustus:/vol/OVS_pv_vpn
    When I try to add this server into the pool using the management GUI, I get
    this error:
    OVM-1011 Oracle VM Server 172.22.36.24 operation HA Check Prerequisite failed: failed:<Exception: ha_precheck_storage_mount failed:<Exception: /OVS must be mounted.> .
    Running "repos.py -i" yields:
    Cluster not available.
    Seems like a chicken and egg problem: I can't add the server to the pool without a
    mounted /OVS, but mounting /OVS is done by adding it to the pool? Or do I have that
    wrong?
    More generally, I'm a bit confused at how the repositories are
    supposed to be managed under 2.2.
    For example, the /etc/init.d/ovsrepositories script is still present,
    but is it still used? When I run it, it prints a couple of errors and
    doesn't seem to mount anything:
    vm1# service ovsrepositories start
    /etc/ovs/repositories does not exist
    Starting OVS Storage Repository Mounter...
    /etc/init.d/ovsrepositories: line 111: /etc/ovs/repositories: No such file or directory
    /etc/init.d/ovsrepositories: line 111: /etc/ovs/repositories: No such file or directory
    OVS Storage Repository Mounter Startup: [  OK  ]
    Should this service be turned off? It seems that ovs-agent now takes
    responsibility for mounting the repositories.
    As an aside, my Manager is still running 2.1.5 - is that part of the
    problem here? Is it safe to upgrade the manager to 2.2 while I still
    have a couple of pools running 2.1.5 servers?
    Thanks in advance,
    Robert.

    rns wrote:
    > Seems like a chicken and egg problem: I can't add the server to the pool without a
    > mounted /OVS, but mounting /OVS is done by adding it to the pool? Or do I have that
    > wrong?
    You have that wrong -- the /OVS mount point is created by ovs-agent while the server is added to the pool. You just need access to the shared storage.
    > For example, the /etc/init.d/ovsrepositories script is still present,
    > but is it still used?
    No, it is not. ovs-agent now handles the storage repositories.
    > As an aside, my Manager is still running 2.1.5 - is that part of the
    > problem here?
    Yes. You absolutely need to upgrade your Manager to 2.2 before attempting to create/manage a 2.2-based pool. The 2.1.5 Manager doesn't know how to tell the ovs-agent how to create/join a pool properly. The upgrade process is detailed in [the ULN FAQ|https://linux.oracle.com/uln_faq.html#10].
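As an aside, when checking several nodes from a script, the `repos.py -l` listings quoted above can be parsed to compare each repository's flag per server ([ * ] on the working master vs [ R ] on the node that fails to join). A rough sketch of mine, assuming the exact output format shown in the question:

```python
import re

# Sketch: parse `repos.py -l` lines of the form
#   [ * ] <uuid> => <source>
# into a dict keyed by UUID. The format is taken from the listings above.
REPO_LINE = re.compile(r"\[\s*(\S)\s*\]\s+(\S+)\s+=>\s+(\S+)")

def parse_repos(listing):
    repos = {}
    for line in listing.splitlines():
        m = REPO_LINE.match(line.strip())
        if m:
            flag, uuid, source = m.groups()
            repos[uuid] = {"flag": flag, "source": source}
    return repos

master = "[ * ] 865a2e52-db29-48f1-98a0-98f985b3065c => augustus:/vol/OVS_pv_vpn"
vm1 = "[ R ] 865a2e52-db29-48f1-98a0-98f985b3065c => augustus:/vol/OVS_pv_vpn"
uuid = "865a2e52-db29-48f1-98a0-98f985b3065c"
print(parse_repos(master)[uuid]["flag"], parse_repos(vm1)[uuid]["flag"])  # * R
```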

  • File System Repository problem

    Hello guys,
    I'm on:
    J2EE Engine 6.40 PatchLevel 98256.313
    Portal 6.0.14.0.0
    KnowledgeManagementCollaboration 6.0.14.4.0 (NW04 SPS14 Patch 4)
    running on a Windows machine.
    I'm trying to set up a File System Repository to connect to a remote share on another Windows machine. I followed the documentation and also some hints here from SDN and still can't connect to the share.
    In component monitor it says:
    Startup Error:  The localroot does not exist:
    10.80.1.55\Test_portal
    In a trace log I'm getting this:
    #1.5#000BCD3F823F00260000000800000AB000040D879131A728#1140775814806#com.sapportals.wcm.service.fsmount.FSMountService#sap.com/irj#com.sapportals.wcm.service.fsmount.FSMountService#Guest#59####3a9c7e20a51d11da928e000bcd3f823f#Thread[ConfigurationEventDispatcher,5,SAPEngine_Application_Thread[impl:3]_Group]##0#0#Error##Plain###Error mounting drive ,path
    10.80.1.55
    Test_portal#
    #1.5#000BCD3F823F00260000000900000AB000040D879131ABAB#1140775814806#com.sapportals.wcm.service.fsmount.FSMountService#sap.com/irj#com.sapportals.wcm.service.fsmount.FSMountService#Guest#59####3a9c7e20a51d11da928e000bcd3f823f#Thread[ConfigurationEventDispatcher,5,SAPEngine_Application_Thread[impl:3]_Group]##0#0#Warning##Plain###FSMountService Reconfiguration (New path) failed to mount network path
    pr1141
    Test_portal - com.sapportals.wcm.service.fsmount.RemoteAccessException:  Network component not started:
    10.80.1.55
    Test_portal
         at com.sapportals.wcm.service.fsmount.FSMountService.mountDrive(FSMountService.java:90)
         at com.sapportals.wcm.service.fsmount.FSMountService.updateConfigurables(FSMountService.java:349)
         at com.sapportals.wcm.service.fsmount.FSMountService.reconfigure(FSMountService.java:287)
         at com.sapportals.wcm.crt.CrtThreadSafeComponentHandler.tryToReconfigure(CrtThreadSafeComponentHandler.java:297)
         at com.sapportals.wcm.crt.CrtThreadSafeComponentHandler.handleReconfigure(CrtThreadSafeComponentHandler.java:147)
         at com.sapportals.wcm.crt.CrtComponentManager.reconfigureComponent(CrtComponentManager.java:343)
         at com.sapportals.wcm.crt.CrtConfigurationEventListener.notifyComponentConfigChanged(CrtConfigurationEventListener.java:79)
         at com.sapportals.wcm.repository.runtime.CmConfigurationProvider.sendChangeEvent(CmConfigurationProvider.java:1288)
         at com.sapportals.wcm.repository.runtime.CmConfigurationProvider.sendChangeEventOrRestart(CmConfigurationProvider.java:1237)
         at com.sapportals.wcm.repository.runtime.CmConfigurationProvider.configEvent(CmConfigurationProvider.java:256)
         at com.sapportals.config.event.ConfigEventService.dispatchEvent(ConfigEventService.java:210)
         at com.sapportals.config.event.ConfigEventService.configEvent(ConfigEventService.java:98)
         at com.sapportals.config.event.ConfigEventDispatcher.callConfigListeners(ConfigEventDispatcher.java:299)
         at com.sapportals.config.event.ConfigEventDispatcher.flushEvents(ConfigEventDispatcher.java:248)
         at com.sapportals.config.event.ConfigEventDispatcher.run(ConfigEventDispatcher.java:105)
    I had an idea that it could be because of the Servlet Engine user, which is set up per the documentation but is only a local user, not a domain user. I don't know to what extent this user is important for the FS Mount Service. I thought that for connecting to a remote share, only the user and password set up in the Network Path settings in CM mattered.
    Changing the user under which the EP runs is quite a difficult step due to our security policies, so I would rather not experiment with that until I have tried all the other possibilities.
    Could you, please, give me some advice?

    Hi Jiri,
    have you used double forward slashes in your network path? Please try: //10.80.1.55/Test_portal
    You can also test from the portal machine that you can mount the share via the command shell:
    "net use t: \\10.80.1.55\Test_portal /user:DOMAIN\USER"
    (make sure you specify the user with the domain)
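The slash direction is the usual stumbling block here: KM wants the forward-slash form, while `net use` wants backslashes. A tiny sketch of the conversion (the helper name is made up for illustration):

```python
# Sketch: convert a Windows UNC path to the forward-slash form
# used in the KM network path setting, per the advice above.
def to_km_network_path(unc_path):
    # \\10.80.1.55\Test_portal -> //10.80.1.55/Test_portal
    return unc_path.replace("\\", "/")

print(to_km_network_path(r"\\10.80.1.55\Test_portal"))  # //10.80.1.55/Test_portal
```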
    Hope this helps,
    Robert

  • ZFS mount point - problem

    Hi,
    We are using ZFS to take snapshots on our Solaris 10 servers. I have a problem when using ZFS mount options.
    The Solaris server we use is:
    SunOS emch-mp89-sunfire 5.10 Generic_127127-11 sun4u sparc SUNW,Sun-Fire-V440
    System = SunOS
    Steps:
    1. I have created the zfs pool named as lmspool
    2. Then created the file system lmsfs
    3. Now I want to set the mountpoint for this ZFS file system (lmsfs) to the "/opt/database" directory (which contains some .sh files).
    4. Then I need to take a snapshot of the lmsfs filesystem.
    5. To set the mountpoint, I tried two ways.
    1. zfs set mountpoint=/opt/database lmspool/lmsfs
    This returns the message "cannot mount '/opt/database/': directory is not empty
    property may be set but unable to remount filesystem".
    If I run the same command a second time, the mountpoint is set properly, and I then took a snapshot of the ZFS filesystem (lmsfs). After making some modifications in the database directory (deleting some files), I rolled back the snapshot, but the original database directory was not recovered. :-(
    2. As a second way, I used the "legacy" option for mounting.
    # zfs set mountpoint=legacy lmspool/lmsfs
    # mount -F zfs lmspool/lmsfs /opt/database
    After running these commands, I cannot see the files of the database directory inside /opt, so I cannot modify anything inside the /opt/database directory.
    Please suggest a solution for this problem, or another way to take a ZFS snapshot with the mountpoint on a UFS file system.
    Thanks,
    Muthukrishnan G

    You'll have to explain the problem more clearly. What exactly is the problem? What is "the original database directory"? The directory with the .sh files? Why are you trying to mount onto a directory that already has files in it in the first place?

  • Disk mounting/ejecting problems... ESPECIALLY FireWire

    Ever since upgrading to Leopard my 3 Lacie external drives have been VERY wonky.
    I've tried a FireWire hub, as well as daisy-chaining them via FW800 (they're the triple interface kind), but getting more than one to mount is always a crap shoot. I've tried USB, and that seems to work much better, although I have an older Mac that only has USB 1.0, so it's pretty much NOT an option.
    The other thing is when I do get any of the 3 mounted, actually EJECTING them is a challenge. Firstly, if it DOES actually eject, the finder window automatically closes. Which seems to be a bug (because it happens with ANY mountable disks, etc). However, most of the time I get a "cannot eject because this disk is in use" error, even when I am in NO WAY using the disk! No apps open!??
    Argh! I know Apple doesn't read these forums (although, as a fan and stock holder, I wish they did)... but there seems to be a LOT of FireWire issues that cropped up with Leopard.
    Any hints?
    e

    I've had the same problem. I use three LaCie triple-interface disks; they used to work great, but now they start unmounting on their own or getting stuck all of a sudden... so great when you're in the middle of rendering a sequence in Final Cut!
    Whenever I manage to run Disk Utility on them (which is not always possible), they seem OK.
    Is there really a problem with the handling of FireWire in Leopard? Any way to fix it? These jinxes really cause me problems in my work.
    Thx for your help

  • OVM 3.0.1 local repository problem

    Good morning all. I am really new to OVM and I am facing a big issue that stops me from evaluating this product.
    I have a couple of servers connected to a SAN array. The array is visible from both of the servers I added to a clustered pool, and I am able to create a shared repository without problems.
    I am not able to see local disks in the OVM Manager administration, and therefore I can't create local repositories. I tried everything I found in this forum, but without success.
    Let's focus on server1: it has a couple of 146 GB disks. I used one of them for the OVS installation, leaving the second disk alone, without partitioning it.
    I tried to create a local repository in the clustered pool, but no way...
    So I created a single full-disk partition and retried to create the repo: still no way.
    Then I created an OCFS2 filesystem in the new partition but, again, I couldn't see the physical local server1 disk.
    Every time I changed the partition configuration, I of course rescanned the physical disks.
    In all my tests, the local physical disks selection list in Generic Local Storage Array @ node1 is always empty.
    Any hint about solving this issue? Any good pointer to a hands-on guide (the official docs are not so good)? Any suggestion about what to look for in the log files for debugging?
    Any answer is welcome...
    Thank you all!

    I was able to do this as follows:
    1. Have an untouched, unformatted disk (no partitions, no file system).
    2. In Hardware, under the VM server's name, scan for the disk; it should show up in the list.
    3. In the Repositories section of Home, add the repository as a physical disk.
    4. "Present" (the green up and down arrows) the physical disk on the VM server itself (don't ask me why you have to do this, but if you don't, it won't find its own disk).

  • NEED HELP Xsan volume is not mounted (strange problem)

    I ask for help in solving a volume mount problem. (I apologize in advance for my English.)
    It all began when the MDC rebooted and the Xsan labels on two LUNs were gone. I relabeled these LUNs using:
    cvlabel -c > label_list
    Then, in the label_list file, I corrected the unknown disks to labels with the same names as before. Then I ran the "cvlabel label_list" command. Finally I got the correct labels on all drives.
    # cvlabel -l
    /dev/rdisk14 [Raidix  meta_i          3365] acfs-EFI "META_I"Sectors: 3906830002. Sector Size: 512.  Maximum sectors: 3906975711.
    /dev/rdisk15 [Raidix  QSAN_I          3365] acfs-EFI "QSAN_I"Sectors: 7662714619. Sector Size: 4096.  Maximum sectors: 7662714619.
    /dev/rdisk16 [Raidix  meta_ii         3365] acfs-EFI "META_II"Sectors: 3906830002. Sector Size: 512.  Maximum sectors: 3906975711.
    /dev/rdisk17 [Raidix  2k_I            3365] acfs-EFI "2K_I"Sectors: 31255934943. Sector Size: 512.  Maximum sectors: 31255934943.
    /dev/rdisk18 [Raidix  2k_II           3365] acfs-EFI "2K_II"Sectors: 31255934943. Sector Size: 512.  Maximum sectors: 31255934943.
    /dev/rdisk19 [Raidix  QSAN_II         3365] acfs-EFI "QSAN_II"Sectors: 7662714619. Sector Size: 4096.  Maximum sectors: 7662714619.
    The volume [2K] starts successfully,
    but it is not mounted on the MDC or the client.
    I ran a volume check:
    sh-3.2# cvfsck -wv 2K
    Checked Build disabled - default.
    BUILD INFO:
    #!@$ Revision 4.2.2 Build 7443 (480.8) Branch Head
    #!@$ Built for Darwin 12.0
    #!@$ Created on Mon Jul 29 17:01:44 PDT 2013
    Created directory /tmp/cvfsck3929a for temporary files.
    Attempting to acquire arbitration block... successful.
    Creating MetadataAndJournal allocation check file.
    Creating Video allocation check file.
    Creating Data allocation check file.
    Recovering Journal Log.
    Super Block information.
      FS Created On               : Wed Oct  2 23:59:20 2013
      Inode Version               : '2.7' - 4.0 big inodes + NamedStreams (0x207)
      File System Status          : Clean
      Allocated Inodes            : 4022272
      Free Inodes                 : 16815
      FL Blocks                   : 79
      Next Inode Chunk            : 0x51a67
      Metadump Seqno              : 0
      Restore Journal Seqno       : 0
      Windows Security Indx Inode : 0x5
      Windows Security Data Inode : 0x6
      Quota Database Inode        : 0x7
      ID Database Inode           : 0xa
      Client Write Opens Inode    : 0x8
    Stripe Group MetadataAndJournal             (  0) 0x746ebf0 blocks.
    Stripe Group Video                          (  1) 0x746ffb60 blocks.
    Stripe Group Data                           (  2) 0xe45dfb60 blocks.
    Inode block size is 1024
    Building Inode Index Database 4022272 (100%).       
       4022272 inodes found out of 4022272 expected.
    Verifying NT Security Descriptors
    Found 13 NT Security Descriptors: all are good
    Verifying Free List Extents.
    Scanning inodes 4022272 (100%).         
    Sorting extent list for MetadataAndJournal pass 1/1
    Updating bitmap for MetadataAndJournal extents 21815 (  0%).                   
    Sorting extent list for Video pass 1/1
    Updating bitmap for Video extents 3724510 ( 91%).                   
    Sorting extent list for Data pass 1/1
    Updating bitmap for Data extents 4057329 (100%).                   
    Checking for dead inodes 4022272 (100%).         
    Checking directories 11136 (100%).        
    Scanning for orphaned inodes 4022272 (100%).       
    Verifying link & subdir counts 4022272 (100%).         
    Checking free list. 4022272 (100%).       
    Checking pending free list.                       
    Checking Arbitration Control Block.
    Checking MetadataAndJournal allocation bit maps (100%).        
    Checking Video allocation bit maps (100%).        
    Checking Data allocation bit maps (100%).        
    File system '2K'. Blocks-5784860352 free-3674376793 Inodes-4022272 free-16815.
    File System Check completed successfully.
    The check did not help.
    sh-3.2# cvadmin
    Xsan Administrator
    Enter command(s)
    For command help, enter "help" or "?".
    List FSS
    File System Services (* indicates service is in control of FS):
    1>*2K[0]                located on big.local:64844 (pid 5217)
    Select FSM "2K"
    Created           :          Wed Oct  2 23:59:20 2013
    Active Connections:          0
    Fs Block Size     :          16K
    Msg Buffer Size   :          4K
    Disk Devices      :          5
    Stripe Groups     :          3
    Fs Blocks         :          5784860352 (86.20 TB)
    Fs Blocks Free    :          3665561306 (54.62 TB) (63%)
    Xsanadmin (2K) > show
    Show stripe groups (File System "2K")
    Stripe Group 0 [MetadataAndJournal]  Status:Up,MetaData,Journal,Exclusive
      Total Blocks:122088432 (1.82 TB)  Reserved:0 (0.00 B) Free:121753961 (1.81 TB) (99%)
      MultiPath Method:Rotate
        Primary  Stripe [MetadataAndJournal]  Read:Enabled  Write:Enabled
    Stripe Group 1 [Video]  Status:Up
      Total Blocks:1953495904 (29.11 TB)  Reserved:270720 (4.13 GB) Free:129179 (1.97 GB) (0%)
      MultiPath Method:Rotate
        Primary  Stripe [Video]  Read:Enabled  Write:Enabled
    Stripe Group 2 [Data]  Status:Up
      Total Blocks:3831364448 (57.09 TB)  Reserved:270720 (4.13 GB) Free:3665432127 (54.62 TB) (95%)
      MultiPath Method:Rotate
        Primary  Stripe [Data]  Read:Enabled  Write:Enabled
    I checked the availability of the LUNs on the MDC and the client; everything is all right there too.
    But, unfortunately, the volume is still not mounted:
    xsanctl mount 2K
    mount command failed: Unable to mount volume `2K' (error code: 5)
    Please help me figure out this situation; I will be grateful for any information.
    Thanks

    That looks like an I/O error. You may have an issue with one or more of the data LUNs. Check the system log for errors.
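For what it's worth, error code 5 lines up with the POSIX EIO errno on Unix-like systems, which is why it reads as an I/O problem:

```python
import errno
import os

# errno 5 is EIO ("Input/output error") on POSIX systems --
# consistent with the reply's diagnosis of the mount failure above.
print(errno.errorcode[5], "-", os.strerror(5))
```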

  • Repository problems

    I have XML docs stored in the purchaseOrder table, which I transferred there using XDB repository folders via FTP.
    I truncated the purchaseOrder table.
    Now I don't see any rows in purchaseOrder, but I still see the repository showing the XML doc in
    Windows Explorer (though not its contents).
    (1) What is a repository? Is it only metadata, with the contents in all cases stored in database tables?
    (2) I am not able to remove the repository paths/links by any method
    (the SQL error is given below; I get a similar error in Windows Explorer and Oracle Enterprise Manager).
    SQL> delete from resource_view where any_path='/home/SCOTT/purchaseOrders/1999';
    delete from resource_view where any_path='/home/SCOTT/purchaseOrders/1999'
    ERROR at line 1:
    ORA-31110: Action failed as resource /home/SCOTT/purchaseOrders/1999 is locked
    by name
    ORA-06512: at "XDB.XDB_RVTRIG_PKG", line 0
    ORA-06512: at "XDB.XDB_RV_TRIG", line 9
    ORA-04088: error during execution of trigger 'XDB.XDB_RV_TRIG'
    (3) When I try to remove a resource from the repository for which the corresponding content
    exists in the database, I have no problem.

    A resource is the general term (from the IETF WebDAV spec) for an object in a file / folder hierarchy. WebDAV defines a set of verbs for creating, manipulating and deleting resources, a set of metadata that a DAV server will maintain about each resource, and a communication protocol based on HTTP and XML for exchanging information between a WebDAV client and server.
    In the case of XML DB, for non-schema-based documents the content and the metadata are both stored in tables owned by the XDB schema. For schema-based content the XDB tables maintain the metadata, and the content is stored in the default tables defined / created when the schema was registered. The location of these tables is, by default, in the relational schema of the user who registered the schema.
    Resources are automatically created when WebDAV or FTP clients send requests to the XML DB protocol handlers. They can also be created via functions in the dbms_xdb PL/SQL package.
    A resource container is simply a 'folder' or 'directory'. Again, these can be created via WebDAV or FTP. They can also be created using dbms_xdb.createFolder()...
    I am not sure what you mean by "In the specify a file at the URL option what value do we give...". Can you be more specific?
    I answered the question about partitioning by folder location in your other post on this topic... It's a possible future direction...

  • [SOLVED] USB mount weird problem.

    So, I have this weird problem: udev automounting does not allow a normal user to write to a USB device.
    /etc/fstab:
    /dev/sdc1 /media/usbhd-sdc1 vfat users,exec,dev,suid 0 1
    /etc/rules.d/10-my-udev.rules:
    KERNEL!="sd[a-z][0-9]", GOTO="mnt_auto_mount_end"
    ACTION=="add", RUN+="/bin/mkdir /media/usbhd-%k", RUN+="/bin/mount /dev/%k"
    ACTION=="remove", RUN+="/bin/umount -l /media/usbhd-%k", RUN+="/bin/rmdir /media/usbhd-%k"
    LABEL="mnt_auto_mount_end"
    When I manually do:
    $ mount /dev/sdxy
    it works nicely, but when I do it with 'sudo' it's the same as with udev. So I guess it's something with udev. Can anyone help me?
    Last edited by muchzill4 (2009-12-02 20:29:38)

    What filesystem is on your flash drive?  If it's FAT, there's no concept of ownership, so linux will just make the user running mount own everything, hence permissions problems when root mounts it.  I believe RUN+="/bin/mount -o uid=<your uid>,gid=<your gid> /dev/%k" or RUN+="/bin/mount -o umask=000 /dev/%k" will make it writeable by you and the world, respectively.  If the second solution is too insecure but you need multiple users to be able to access mounted flash drives, check out HAL.
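When filling in those uid/gid mount options, the numeric IDs for a user can be looked up like this (a sketch; in the udev rule itself you would hard-code the numbers, since the rule runs as root):

```python
import pwd

# Sketch: look up the numeric uid/gid for a user name, to fill in
# the -o uid=...,gid=... mount options suggested above.
def fat_mount_opts(username):
    entry = pwd.getpwnam(username)
    return "uid=%d,gid=%d" % (entry.pw_uid, entry.pw_gid)

print(fat_mount_opts("root"))  # uid=0,gid=0
```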

  • Repository problem in Designer

    I've installed the 8i DB in the ora8i home and Forms 6i in the default home, and afterwards I installed Designer 6i in the default home. Now when I start Designer, it gives an error message that 'the user does not have an installed repository; run the Repository Admin Utility'. On trying that, I was unable to log in with 'system/manager', so I logged in as 'scott/tiger' and tried to install the repository, but it gives the error 'CDR-21244 Process has been aborted. Causes: the previous process has failed or the user has aborted it.'
    Please guide me how to solve the repository re-installation. (I also installed Designer a second time, but the problem continues.)

    It is very important to read the online documentation of Designer/2000 (6i) before installing a repository. It covers the system requirements and gives step-by-step instructions for installing the repository, including creating a new tablespace, user, roles, etc. One must read these steps before attempting to install the repository. I have done this and it worked.
    Hope this helps.
    Thiru

  • IDM 6.0SP1 & Oracle as a repository problems

    We use IDM 6.0SP1 + WebSphere 6 + Oracle 9 (+ Oracle JDBC driver v10.x) as a repository with about 1,600,000 IDM accounts.
    Each IDM account has at most 2 resource accounts.
    We're facing two problems:
    1) IDM is used to read changes (user creations and modifications) in 10 different DB2 tables (via 10 DB2 ActiveSync adapters) and provision a single LDAP directory based on those changes. We have about 10 changes per second to consume. The Oracle repository is about 32 GB.
    We sometimes restart IDM, but the adapters which are supposed to start automatically don't seem to start, or they start so slowly they look frozen.
    We suspect bad Oracle response times, and we also suspect each adapter triggers a full (Oracle) database scan when starting, which may take a while, even though we run a statistics script on Oracle every night.
    We've already applied all the suggested/documented repository optimizations, so we wonder what else we could possibly do to improve IDM's interaction with its repository. For example, is there anything we can tune in the IDM RepositoryConfiguration XML object?
    2) In the case of a "long" (> 5 minutes) repository or network outage (given that IDM and its repository reside on different servers), we noticed the IDM adapters don't restart well automatically even when configured to, or they look frozen. We have to manually "try" different things in order for the adapters to start.
    Most commercial software relying on networks or databases deals with such outages automatically so that it recovers on its own, or at least just loses unsaved work but can restart anyway.
    Is there such a feature in IDM (6.0SP1 or later)? If not, what are the recommended actions to take in order for the adapters to start and process the remaining
    DB2 changes?

    IDM does some cleanup in its database when it starts. Perhaps that is what is slowing you down.
    How many rows do you have in your "TASK" table?
    Edited by: PaulHilchey on Mar 6, 2008 6:15 PM

  • CM Repository problem

    Hello all,
    I have a CM repository set up with FSDB persistence mode. The files sit on a Windows server other than the portal server. The server that these files sit on had been shut down over the weekend for some reason. When I came in this morning, the CM repository, of course, could not see the files. We have since brought the server back up, but KM still cannot see the files. In the KM monitoring log, I see the error:
    Startup Error:  folder '
    10.0.32.191\PRDRepositoryEP6\san\Files' does not exist
    What can I do to get the portal to see this again?
    Thanks!
    -Stephen Spalding
    Web Developer
    Graybar

    I solved the problem on my own; I was able to get the repository to come back without restarting the portals.
    What I did was create a new network path that had all of the same attributes as the network path used by the repository in question. I didn't have to tell the repository to use the new network path; the repository just started working after I created it. It's almost as if the portal had to be reminded of where the repository's files were located.
    The reason I remembered to try this is that I had encountered the same problem in our dev environment last week, and in the process of trying to create a new repository, I noticed that the old one started working after I created a new network path.
    Thanks to those who responded,
    respectfully,
    Stephen Spalding

  • Re: ActiveX and Repository Problems

    Hi Jason,
    Following is the Forte flag that I've used to redirect stdout of a
    partition:
    -fl c:\forte\log.txt(*:user1:1-63:255)
    The rule of thumb that I know in case of segmentation violation is:
    'Don't fight it, by-pass it!'
    Hope this helps a little.
    Patrick Leveque
    Reply-To: "Jason Carpenter" <[email protected]>
    Our setup:
    Clients: NT 4, Forte 3.0.F.2
    Server: Unix, Forte 3.0.F.2
    Central repository is located on the server, we are using ATTACHED shadows.
    Our clients employ the use of an ActiveX control that is a spreadsheet-like
    grid. As far as we know, it is the only spreadsheet-like grid that is
    compatible with Forte. The ActiveX control is Protoview's Datatable v4.0.
    Some of the time it behaves well in Forte. Many times it does not.
    Situation:
    While in debug mode (or after having pressed the running man) when we
    attempt to interact with the grid, we get a Fatal Error on the task that
    created the ActiveX control. Unfortunately this Fatal Error completely shuts
    down our Forte clients. Usually there is not even time to look at stdout to
    determine what the problem or error is. We have started sending stdout to a
    separate log file, unfortunately the flags for logging are not documented
    well enough for us to pick the right ones to determine what is really
    happening.
    In times past we have seen error messages to stdout that inform us that
    there was a read/write error with the repository. Others mention
    serialization/deserialization problems. All have a System Segmentation
    violation error message. And we are always shut out of our client session.
    We have noticed that sometimes when we integrate the problem will go away
    for an hour or two. Other times integration does not help.
    Question:
    Has anyone else experienced this type of problem with ActiveX controls?
    Does anyone have any suggestions to keep this problem from happening?
    Jason Carpenter
    CSC Consulting & Systems Integration
    [email protected]
    3811 Turtle Creek Blvd.
    20th Floor
    Dallas, TX 75219
    214.520.0555
    To unsubscribe, email '[email protected]' with
    'unsubscribe forte-users' as the body of the message.
    Searchable thread archive <URL:http://pinehurst.sageit.com/listarchive/>


  • Display Portlet Repository Problem - Page cannot be found

    hi,
    Just upgraded to 3.0.9.
    When I go to Portal Homepage ..... Administer ..... Display Portlet Repository and then select an item, I get the following error message in my browser:
    "The page cannot be found"
    Kin

    Hi
    The way I fixed this problem was to edit the portal provider and then click Refresh Provider. The next time you go into Display Portlet Repository, the item should be OK.
    The "Refresh Remote Provider Information" option seems not to work.
    Hope this helps
    Kin

Maybe you are looking for

  • GR/IR data migration

    Hi,    I want to know what are all difficulties or issues may rise while upload data (GR/GI,IR) in a new client (migrate data from other client).     In this case Mtl,Vendor,PO has already uploaded in the new client SAP.      As GR/IR has integrate l

  • Custom icon leaf tree

    Hi developers, I have one how to do question: In a tree I like to introduce a custom icon on each leaf. For this, I tried to introduce at the icon column sql #WORKSPACE_IMAGES#picture.gif but nothing happened, instead if I use #IMAGE_PREFIX#standard_

  • Not inserting date field-very urgent

    thanks in advance any help appreciated im getting all the datas in csv as an array and inserting using the insert method, the date(not datetime) field is not inserting (vb.net), and not showing any error im getting the datas as array string wat iv to

  • Sqlexception ,when trying to access stored procedure

              hi           i have a strange problem, iam using jsp code for accessing stored procedure in sql           server database, below is my java code which calls stored procedure,           <%                CallableStatement cstmt=null;        

  • System logs using up space

    I was checking my hard drive usage today and i noticed that the file named Logs is taking up about 8.4 gigs of my hard drive. In the folder there is a file named system.log that is taking up about 7.4 gigs. Why is this system.log taking up so much sp
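The truncated CallableStatement snippet in the "Sqlexception" teaser above can be sketched more completely in plain Java. This is a minimal sketch, not the original poster's code: the procedure name `sp_GetOrders`, its single parameter, and the caller-supplied `Connection` are all hypothetical. The only part that runs without a live database is the small helper that builds the JDBC call-escape string.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;

public class StoredProcExample {

    // Build the JDBC call-escape string, e.g. "{call sp_GetOrders(?)}".
    public static String buildCallSql(String procName, int paramCount) {
        StringBuilder sb = new StringBuilder("{call ").append(procName).append("(");
        for (int i = 0; i < paramCount; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(")}").toString();
    }

    // Sketch: call a stored procedure through an existing Connection.
    // try-with-resources closes the statement and result set even when
    // an SQLException is thrown, so no handles are leaked.
    public static void callProc(Connection con, String customerId) throws SQLException {
        try (CallableStatement cstmt = con.prepareCall(buildCallSql("sp_GetOrders", 1))) {
            cstmt.setString(1, customerId);
            try (ResultSet rs = cstmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```

One common cause of the SQLException in that thread's situation is closing the statement in the wrong place (or not at all) inside JSP scriptlet code; keeping the JDBC work in a method like this, with try-with-resources, avoids that class of error.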