Storage category not defaulted during DMS doc. creation

Hi Experts
We are manually creating a DMS document info record with transaction CV03N and attaching a business document to it. As the "Storage category" field is not defaulted with any value, the attached document is stored locally on the PC instead of on the content server.
The configuration seems to be in place:
1) Profile key created
2) Profile key assigned to a role
3) Role assigned to the user
4) Storage category assigned for the profile key and application key
Can you please let me know if I am missing anything?
Regards
Gobinathan G

Dear Gobinathan Govindarajan,
Follow these steps:
1. Create a storage category. You must already have created one, say "ZHRW".
2. Go to "Define Profile".
3. Define a profile, say "ZREJP", with description "TEST".
4. Select profile "ZREJP" and double-click "Assign groups/users to the profiles".
5. Enter profile "ZREJP" and the role or user ID to which the profile key is to be assigned. Assign the profile key either to a role only or to a user only.
6. Go back.
7. Select profile "ZREJP" and double-click "Determine definitions for applications".
8. Enter profile "ZREJP" and, against it, enter a workstation application, say "XLS", and storage category "ZHRW"; tick all the check boxes.
9. Repeat step 8 for every application you will be using.
10. Save, and then check for the user whether the original is checked in to the default storage category "ZHRW".
11. If you want to ensure that check-in happens when the DIR is saved, go to DC10 and, in the document type status configuration, tick the "Check in required" check box for each status you have defined. Then, when a user saves a DIR without checking in the files, the system checks the files in to the default storage category automatically.
Hope it works for you.
With Warm Regards

Similar Messages

  • "Storage is not configured" during failover - COH-1467 still not fixed?

    I am running a test program using two cache nodes and one "client JVM", all on the same machine (the first cache node is used as WKA). When I kill one of the cache nodes and then restart it, I get the following exceptions:
    In the surviving cache node:
    2009-01-22 08:01:14.753/112.718 Oracle Coherence GE 3.4.1/407 <Error> (thread=DistributedCache, member=1): An exception (java.lang.ClassCastException) occurred reading Message AggregateFilterRequest Type=31 for Service=DistributedCache{Name=DistributedCache, State=(SERVICE_STARTED), LocalStorage=enabled, PartitionCount=257, BackupCount=1, AssignedPartitions=257, BackupPartitions=0}
    2009-01-22 08:01:14.753/112.718 Oracle Coherence GE 3.4.1/407 <Error> (thread=DistributedCache, member=1): Terminating DistributedCache due to unhandled exception: java.lang.ClassCastException
    2009-01-22 08:01:14.753/112.718 Oracle Coherence GE 3.4.1/407 <Error> (thread=DistributedCache, member=1):
    java.lang.ClassCastException: java.lang.Long
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$AggregateFilterRequest.read(DistributedCache.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:117)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache.onNotify(DistributedCache.CDB:3)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    On the restarted cache node:
    2009-01-22 08:01:15.722/2.220 Oracle Coherence GE 3.4.1/407 <Info> (thread=Main Thread, member=n/a): Loaded cache configuration from resource "file:/C:/Javaproj/Query/lib/master-coherence-cache-config.xml"
    2009-01-22 08:01:16.565/3.063 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Service Cluster joined the cluster with senior service member n/a
    2009-01-22 08:01:16.628/3.126 Oracle Coherence GE 3.4.1/407 <Info> (thread=Cluster, member=n/a): Failed to satisfy the variance: allowed=16, actual=31
    2009-01-22 08:01:16.628/3.126 Oracle Coherence GE 3.4.1/407 <Info> (thread=Cluster, member=n/a): Increasing allowable variance to 17
    2009-01-22 08:01:17.003/3.501 Oracle Coherence GE 3.4.1/407 <Info> (thread=Cluster, member=n/a): This Member(Id=5, Timestamp=2009-01-22 08:01:16.768, Address=138.106.109.121:54101, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22728, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1) joined cluster with senior Member(Id=1, Timestamp=2009-01-22 07:59:24.098, Address=138.106.109.121:54100, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22948, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=1)
    2009-01-22 08:01:17.065/3.563 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member(Id=4, Timestamp=2009-01-22 08:00:35.566, Address=138.106.109.121:8088, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:11544) joined Cluster with senior member 1
    2009-01-22 08:01:17.081/3.579 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member 1 joined Service Management with senior member 1
    2009-01-22 08:01:17.081/3.579 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member 1 joined Service InvocationService with senior member 1
    2009-01-22 08:01:17.081/3.579 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member 1 joined Service DistributedCacheNoBackup with senior member 1
    2009-01-22 08:01:17.097/3.595 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member 4 joined Service InvocationService with senior member 1
    2009-01-22 08:01:17.097/3.595 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=n/a): Member 4 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:17.222/3.720 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): TcpRing: connecting to member 4 using TcpSocket{State=STATE_OPEN, Socket=Socket[addr=/138.106.109.121,port=8088,localport=3609]}
    2009-01-22 08:01:17.253/3.751 Oracle Coherence GE 3.4.1/407 <D5> (thread=Invocation:Management, member=5): Service Management joined the cluster with senior service member 1
    2009-01-22 08:01:17.393/3.891 Oracle Coherence GE 3.4.1/407 <D5> (thread=Invocation:InvocationService, member=5): Service InvocationService joined the cluster with senior service member 1
    2009-01-22 08:01:18.643/5.141 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:19.050/5.548 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:23.659/10.157 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:24.284/10.782 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:28.674/15.172 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:29.503/16.001 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:33.674/20.172 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:33.721/20.219 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:38.674/25.172 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:38.956/25.454 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:43.690/30.188 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 joined Service DistributedCache with senior member 4
    2009-01-22 08:01:44.174/30.672 Oracle Coherence GE 3.4.1/407 <D5> (thread=Cluster, member=5): Member 1 left service DistributedCache with senior member 4
    2009-01-22 08:01:47.659/34.157 Oracle Coherence GE 3.4.1/407 <Error> (thread=Main Thread, member=5): Error while starting service "InvocationService": com.tangosol.net.RequestTimeoutException: Timeout during service start: ServiceInfo(Id=2, Name=InvocationService, Type=Invocation
    MemberSet=ServiceMemberSet(
    OldestMember=Member(Id=1, Timestamp=2009-01-22 07:59:24.098, Address=138.106.109.121:54100, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22948)
    ActualMemberSet=MemberSet(Size=3, BitSetCount=2
    Member(Id=1, Timestamp=2009-01-22 07:59:24.098, Address=138.106.109.121:54100, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22948)
    Member(Id=4, Timestamp=2009-01-22 08:00:35.566, Address=138.106.109.121:8088, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:11544)
    Member(Id=5, Timestamp=2009-01-22 08:01:16.768, Address=138.106.109.121:54101, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22728)
    MemberId/ServiceVersion/ServiceJoined/ServiceLeaving
    1/3.1/Thu Jan 22 07:59:27 CET 2009/false,
    4/3.1/Thu Jan 22 08:00:36 CET 2009/false,
    5/3.1/Thu Jan 22 08:01:17 CET 2009/false
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onStartupTimeout(Grid.CDB:6)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:27)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:38)
         at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:28)
         at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
         at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:841)
         at com.tangosol.net.DefaultCacheServer.start(DefaultCacheServer.java:140)
         at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:61)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at com.intellij.rt.execution.application.AppMain.main(AppMain.java:90)
    Exception in thread "Main Thread" com.tangosol.net.RequestTimeoutException: Timeout during service start: ServiceInfo(Id=2, Name=InvocationService, Type=Invocation
    MemberSet=ServiceMemberSet(
    OldestMember=Member(Id=1, Timestamp=2009-01-22 07:59:24.098, Address=138.106.109.121:54100, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22948)
    ActualMemberSet=MemberSet(Size=3, BitSetCount=2
    Member(Id=1, Timestamp=2009-01-22 07:59:24.098, Address=138.106.109.121:54100, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22948)
    Member(Id=4, Timestamp=2009-01-22 08:00:35.566, Address=138.106.109.121:8088, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:11544)
    Member(Id=5, Timestamp=2009-01-22 08:01:16.768, Address=138.106.109.121:54101, MachineId=36217, Location=site:global.scd.scania.com,machine:N27858,process:22728)
    MemberId/ServiceVersion/ServiceJoined/ServiceLeaving
    1/3.1/Thu Jan 22 07:59:27 CET 2009/false,
    4/3.1/Thu Jan 22 08:00:36 CET 2009/false,
    5/3.1/Thu Jan 22 08:01:17 CET 2009/false
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onStartupTimeout(Grid.CDB:6)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:27)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:38)
         at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:28)
         at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
         at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
         at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:841)
         at com.tangosol.net.DefaultCacheServer.start(DefaultCacheServer.java:140)
         at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:61)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at com.intellij.rt.execution.application.AppMain.main(AppMain.java:90)
    2009-01-22 08:01:47.659/34.157 Oracle Coherence GE 3.4.1/407 <Error> (thread=Invocation:InvocationService, member=5): validatePolls: This service timed-out due to unanswered handshake request. Manual intervention is required to stop the members that have not responded to this Poll
    PollId=1, active
    InitTimeMillis=1232607677393
    Service=InvocationService (2)
    RespondedMemberSet=[1]
    LeftMemberSet=[]
    RemainingMemberSet=[4]
    2009-01-22 08:01:47.659/34.157 Oracle Coherence GE 3.4.1/407 <D5> (thread=Invocation:InvocationService, member=5): Service InvocationService left the cluster
    2009-01-22 08:01:47.659/34.157 Oracle Coherence GE 3.4.1/407 <D4> (thread=ShutdownHook, member=5): ShutdownHook: stopping cluster node
    Process finished with exit code 1
    On the client JVM:
    2009-01-22 08:01:14.815/41.265 Oracle Coherence GE 3.4.1/407 <D5> (thread=Invocation:InvocationService, member=4): Repeating AggregateFilterRequest due to the re-distribution of PartitionSet[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127]
    java.lang.RuntimeException: Storage is not configured
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.onMissingStorage(DistributedCache.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.sendPartitionedRequest(DistributedCache.CDB:31)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.aggregate(DistributedCache.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.aggregate(DistributedCache.CDB:52)
         at com.tangosol.coherence.component.util.SafeNamedCache.aggregate(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.NearCache.aggregate(NearCache.java:453)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.typeSearch(ValueQueryInvocable.java:260)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.queryStringFirstSearch(ValueQueryInvocable.java:300)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.run(ValueQueryInvocable.java:135)
         at com.scania.oas.coherence.invocables.InvocableWrapper.run(InvocableWrapper.java:54)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService.onInvocationRequest(InvocationService.CDB:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService$InvocationRequest.onReceived(InvocationService.CDB:40)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    java.lang.RuntimeException: Storage is not configured
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.onMissingStorage(DistributedCache.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.ensureRequestTarget(DistributedCache.CDB:33)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.sendPartitionedRequest(DistributedCache.CDB:31)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$BinaryMap.aggregate(DistributedCache.CDB:11)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.DistributedCache$ViewMap.aggregate(DistributedCache.CDB:52)
         at com.tangosol.coherence.component.util.SafeNamedCache.aggregate(SafeNamedCache.CDB:1)
         at com.tangosol.net.cache.NearCache.aggregate(NearCache.java:453)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.queryStringSearch(ValueQueryInvocable.java:268)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.queryStringFirstSearch(ValueQueryInvocable.java:297)
         at com.scania.oas.coherence.invocables.ValueQueryInvocable.run(ValueQueryInvocable.java:135)
         at com.scania.oas.coherence.invocables.InvocableWrapper.run(InvocableWrapper.java:54)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService.onInvocationRequest(InvocationService.CDB:10)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService$InvocationRequest.onReceived(InvocationService.CDB:40)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:130)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:37)
         at java.lang.Thread.run(Thread.java:619)
    I even tried to rewrite the run method in the invocable so that, in a loop, it would delay and then retry its calculations whenever it received a runtime exception with the text "Storage is not configured", retrieving a new named cache each time, but this did not help; it never seemed to recover.
    Since I don't see any of my application classes in the "class cast" stack trace, I assume it is a Coherence-internal problem. Or can you think of some user programming error that could cause it? By the way, I am not using any long or Long in my application.
    Best Regards
    Magnus
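The retry loop Magnus describes can be sketched roughly as follows. This is a generic stand-in, not the Coherence API: a `Supplier` represents the cache aggregation call, and the attempt count and delay are arbitrary. As Magnus notes, this kind of client-side retry only helps if the cluster actually recovers; it cannot fix the underlying failure.

```java
import java.util.function.Supplier;

// Generic retry-with-delay wrapper for a transient "Storage is not configured"
// condition. The Supplier stands in for the cache aggregation call.
public class RetryOnMissingStorage {

    static <T> T retry(Supplier<T> action, int maxAttempts, long delayMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                String msg = e.getMessage();
                // Retry only the specific transient condition; rethrow anything else.
                if (msg == null || !msg.contains("Storage is not configured")) {
                    throw e;
                }
                last = e;
                try {
                    Thread.sleep(delayMillis); // back off before retrying
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw new RuntimeException(ie);
                }
            }
        }
        throw last; // retries exhausted
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Simulated call that fails twice with the transient error, then succeeds.
        String result = retry(() -> {
            if (++calls[0] < 3) {
                throw new RuntimeException("Storage is not configured");
            }
            return "aggregated";
        }, 5, 10L);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The loop deliberately rethrows any exception other than the one transient message, so real errors are not swallowed by the retry.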

    Hi Magnus,
    The log you provided seems to indicate that the problem was caused by the de-serialization of the “AggregateFilterRequest” message. The only explanation we have is that you are using a custom Filter that has asymmetrical serialization/deserialization routines, causing this failure. Could you please send us the corresponding client code?
    Meanwhile, we will open a JIRA issue, to make sure that Coherence handles this kind of error more gracefully.
    -David
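David's diagnosis of asymmetric serialization/deserialization can be illustrated with a plain-Java sketch. This deliberately uses generic `DataOutputStream`/`DataInputStream` code rather than the Coherence Filter APIs, and the field names and values are made up. If the write side emits a `long` but the read side consumes an `int`, the stream is misaligned from that point on, and later fields come back corrupted, which can surface far from the real bug (for example as a `ClassCastException` while reading an unrelated message).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Generic illustration of asymmetric serialization: the writer and reader
// disagree about one field's width, so every field after it is misread.
public class AsymmetricSerializationDemo {

    // Writer emits a long threshold (8 bytes) followed by a field name.
    static byte[] write(long threshold, String field) {
        try {
            ByteArrayOutputStream buf = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(buf);
            out.writeLong(threshold);
            out.writeUTF(field);
            out.flush();
            return buf.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Reader mistakenly consumes only an int (4 bytes), then reads the
    // "field name" from the middle of the long -- garbage, not the real value.
    static String readAsymmetric(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            in.readInt();          // should have been readLong()
            return in.readUTF();   // stream is now 4 bytes off
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Symmetric reader: consumes exactly what the writer produced.
    static String readSymmetric(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            in.readLong();         // matches writeLong
            return in.readUTF();   // correctly aligned
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        byte[] data = write(42L, "status");
        // The asymmetric reader does not crash here; it silently returns a
        // wrong value ("" instead of "status").
        System.out.println("asymmetric: \"" + readAsymmetric(data) + "\"");
        System.out.println("symmetric:  \"" + readSymmetric(data) + "\"");
    }
}
```

The symmetric reader recovers the field correctly; keeping read and write routines in exact lockstep is the invariant a custom Filter's serialization must maintain.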

  • Partners not defaulting in UB doc type STO

    Hi Gurus,
    Can anybody please explain why the partners do not default in the partners tab of the PO with doc type UB when I enter the supplying plant, but they do default with doc type NB when I enter the vendor? Why is this so?
    Thanks
    Anusha

    An STO with document type UB is meant for stock transfer between plants of the same company code.
    There is no vendor in this document; it is the company's internal functionality, so partners are neither relevant nor required.
    An STO with document type NB, on the other hand, is for plant-to-plant stock transfer across different company codes, so it is treated as a purchase order sent to a vendor.

  • OVM 3.3.1:  NFS storage is not available during repository creation

    Hi, I have OVM Manager running on a separate machine, managing 3 servers running OVM Server in a server pool. One of the servers also exports an NFS share that all other machines are able to mount and read/write. I want to use this NFS share to create an OVM repository, but so far I have been unable to get it to work.
    From this first screen shot we can see that the NFS file system was successfully added under storage tab and refreshed.
    https://www.dropbox.com/s/fyscj2oynud542k/Screenshot%202014-10-11%2013.40.00.png?dl=0
    But it is not available when adding a repository, as shown below. What can I do to make it show up here?
    https://www.dropbox.com/s/id1eey08cdbajsg/Screenshot%202014-10-11%2013.40.19.png?dl=0
    No luck with CLI either.  Any thoughts?
    OVM> create repository name=myrepo fileSystem="share:/" sharepath=myrepo - Configurable attribute by this name can't be found.
    == NFS file system refreshed via CLI ==
    OVM> refresh fileServer name=share
    Command: refresh fileServer name=share
    Status: Success
    Time: 2014-10-11 13:28:14,811 PDT
    JobId: 1413059293069
    == file system info ==
    OVM> show fileServer name=share
    Command: show fileServer name=share
    Status: Success
    Time: 2014-10-11 13:28:28,770 PDT
    Data:
      FileSystem 1 = ff5d21be-906d-4388-98a2-08cb9ac59b43  [share]
      FileServer Type = Network
      Storage Plug-in = oracle.generic.NFSPlugin.GenericNFSPlugin (1.1.0)  [Oracle Generic Network File System]
      Access Host = 1.2.3.4
      Admin Server 1 = 44:45:4c:4c:46:00:10:31:80:51:c6:c0:4f:35:48:31  [dev1]
      Refresh Server 1 = 44:45:4c:4c:46:00:10:31:80:51:c6:c0:4f:35:48:31  [dev1]
      Refresh Server 2 = 44:45:4c:4c:47:00:10:31:80:51:b8:c0:4f:35:48:31  [dev2]
      Refresh Server 3 = 44:45:4c:4c:33:00:10:34:80:38:c4:c0:4f:53:4b:31  [dev3]
      UniformExports = Yes
      Id = 0004fb0000090000fb2cf8ac1968505e  [share]
      Name = share
      Description = NFS exported /dev/sda1 (427GB) on dev1
      Locked = false
    == version details ==
    OVM server:3.3.1-1065
    Agent Version: 3.3.1-276.el6.7
    Kernel Release: 3.8.13-26.4.2.el6uek.x86_64
    Oracle VM Manager
    Version: 3.3.1.1065
    Build: 20140619_1065

    Actually, OVM, as with all virtualization servers, is usually only the head of a comprehensive infrastructure. OVM seems quite easy at the start, but I'd suggest that you at least skim through the admin manual to get some understanding of the concepts behind it. OVS thus usually provides only the CPU horsepower, not the storage, unless you only want a single-server setup. If you plan on having a real multi-server setup, then you will need shared storage.
    The shared storage for the server pool, as well as the storage repository, can be served from the same NFS server without issues. If you want to have a little testbed, then NFS is for you. It does lack some features that OCFS2 benefits from, like thin provisioning, reflinks and sparse files.
    If you want to remove the NFS storage, you'll need to remove any remainders of any OVM object, like storage repositories or server pool filesystems: unpresent the storage repo and delete it afterwards… Also, I hope that you didn't create the NFS export directly on the root of the drive, since OVM wants to remove every file on the NFS export, and on the root of any volume there's the lost+found folder, which OVM, naturally, can't remove. Getting rid of such a storage repo can be a bit daunting…
    Cheers,
    budy

  • Output determination does not happen during Sales Document creation

    Hello All,
    I am facing an issue where the automatic output determination does not happen when the Sales Order is created. Let me explain a bit on the background.
    Output type ZDL2 has been defined as a special function (TNAPR-NACHA = 8) attached to a custom program and Form routine to create Outbound delivery after successful posting of the sales order.
    Condition records are maintained to trigger this output for the Key Combination of Sales Org/Distrib Channel/Division/Route. Access sequence has also been correctly maintained for the above fields.
    When the Sales order is saved, the output is supposed to be triggered but out of several sales orders created, it triggered for just one document. We have no clues on what could be the issue.
    One question here is: can we have the access sequence maintained for fields from both header and item level (i.e., KOMKBV1 and KOMPBV1)? Please be reminded that, in our case, the field ROUTE comes from the field catalog KOMPBV1.
    Another point we noticed is that the function module COMMUNICATION_AREA_KOMPBV1, which is supposed to fill the item data communication structure for output determination, was never called.
    Has anyone come across such situations? Comments or suggestions are really appreciated.
    Kind Regards
    Sabu Kuriakose

    Sabu Kuriakose,
    You are using ROUTE as one parameter for the condition table, and it is at the item level, while the output is triggered at the header level.
    A sales order can contain multiple items with different routes, so at header level how will you identify the route?
    The fields might be available for creating the condition table, but the problem is that the fields must actually receive a value during order processing. In the analysis screen you can check whether the ROUTE field is getting a value or not.
    Also, for creating a delivery you need a shipping point. As above, you can have multiple items with different shipping points; how will you handle that scenario?
    If you can explain that, it will be easier to answer your problem.

  • Add Approvers is not saving during Shopping basket creation

    Hi SRM Gurus,
    We have Implemented N step Badi for our client .During Post Go Live we are facing this issue
    While creating shopping basket for the first time if we add the approver and save the shopping basket the add approver is not retaining in the shopping basket it get vanished.
    Can you tell what could be the problem. where we can check in the Workflow .(which function module to check (or) any ohter program
    G.Ganesh Kumar

    Hi,
    Which SRM version are you working on?
    Have a look at these notes:
    Note 1127341 - Added Approver/Reviewer disappear on screen refresh
    Note 1175341 - document creator cannot add adhoc approver in shop-for case
    Note 423112 - Determining the processor assignment in an insertable step
    Note 414928 - 'Add approver' works wrong when second approver not found
    Note 382393 - Add approver/reviewer in Approval Preview
    BR,
    Disha.

  • Excise Duty not coming during sales order creation

    Hi Experts,
    Subject: Excise duty is not calculating in the sales order for a particular plant.
    We have configured a new manufacturing plant. In this plant, the system is not performing the calculation on the base amount and no tax duty is coming, but for another plant it comes properly.
    VK11 and J1ID are maintained properly.
    Regards
    Sachin G

    Sorry, no idea what product you are using. Likely a classic SAP product of some sort. If it's Crystal Reports, then you are using someone's created report, so check with that person, or possibly try the B1 forum.

  • Speakerphone mode is not default during call - Tab...

    Sorry, I added this topic twice by mistake: http://community.skype.com/t5/Android/Tablet-Lenovo-A-3500-doesn-t-turn-on-the-speakerphone-mode/td-...

    You are aware that there is a separate volume level for in-call, right? Did you try altering the volume level while talking to someone? You can set the in-call volume between 0 and 10.
    If this does not help, please let us know:
    1) phone model
    2) Windows Phone version
    3) Firmware version
    4) carrier/operator name
    Microsoft Windows Phone MVP

  • Easy DMS | storage category

    Dear experts,
    Is it possible to create a dependency between document types and storage categories in EasyDMS?
    Should EasyDMS select a storage category by default, or inherit it from the root folders?
    e.g.
    project documents => StorCat 01
    office documents => StorCat 02
    System: ECC 6.0 and EDMS 7.0
    Thanks in advance...

    Hi,
    Check Z_EASYDMS_GETNEWFILEDATA or the BADI method EASYDMS_MAIN01~GETNEWFILEDATA; you can change the storage category there.
    Also check note 830773.
    Regards
    Surjit

  • DMS: Relocation of storage category/documents to another content server

    Our DMS content server was implemented with a 3-tier environment (dev-test-production).  There are several SAP instances storing documents to a shared storage category in the global DMS server.
    I found recently that one of the production instances on 4.6C was inadvertently configured to store documents on the DMS development server in this same storage category name.
    I have found programs to relocate documents from one storage category to another, which could be used if I configured a new storage category pointing to the production content server.  However, for business reasons the name of the storage category should not be changed.
    Note 445057 and the document "Operational Guide - SAP Content Server" provide information on relocation of documents using programs RSCMSEX (export) and RSCMSIM (import). I would like to use these reports to export the documents stored incorrectly on the development server, change the configuration of the content server to the production server address, and import the documents back into the existing storage category.
    The documentation does not state whether RSCMSIM will add the imported documents to the existing content or will delete all other documents. I would like to ensure that I will not lose existing documents on the production server.
    I would appreciate advice or feedback from anyone familiar with these programs or relocating documents in DMS.
    Thank you,
    Kathie

    Hi Aby,
    if setting up a profile does not help, another proposal would be to use the BADI DOCUMENT_STORAGE01 with method BEFORE_LIST_STORAGECAT, because there you can influence which storage categories are displayed at "Check in to KPro" when selecting the storage category. If only one category is handed back by the BADI method, the pop-up should be avoided and that category should be chosen.
    Maybe this could be a solution.
    Best regards,
    Christoph

  • Batch split not happening during delivery

    Hi Experts,
    I have enough of stock for a material with different batches (with different expiry dates). When I create an order for qty 100, system confirms it on a certain date taking 1st batch nearest to expiry date, as per the search startegy set in the cponfiguration.Please note here that the batch that system picks has got only 50 qty in stock, but it shows whole 100 qty against it. Probably because Batch split is not possible during sales order creation, hence system showing whole qty against one batch.
    Now, when I try to create delivery, it shows only 50 qty of the same batch in delivery document. I select the line item and go to the "Batch split" tab to effect batch split, but system does not allow. It says "Batch already specified for material".Here is the detail for your analysis-
    Batch in item 000010 already specified for material 2000978
    Message no. VL221
    Diagnosis
    The batch was either predefined in the sales order that the delivery is based on or it was assigned to the delivery item manually. Therefore, you can no longer carry out a batch split for the delivery quantity of the items.
    Procedure
    In order to make the batch split possible, you can cancel the assignment of the delivery item to the batch, if the delivery's processing status allows.
    Kindly advise.
    Thanks in advance,
    Randhir

    Thanks Mr. P Gomatheeswaran,
    Now the batch is not being determined during sales order creation, that's fine. While creating the delivery, when I select the line item and go to the Batch split tab, I can see the batch being split in two, which is OK. But when I try to set the picking quantity equal to the delivery qty, the system says "Picked quantity is larger than the quantity to be delivered" (Message no. VL019).
    Also Storage location field is grayed out.
    Regards,
    Randhir

  • Reconciliation account is not defaulting in the business partner screen

    Hi,
    I am working on an IS-H implementation project right now. I am facing an issue where the FI customer is not created during business partner creation. The error message that appears when I try to create the business partner is "Required field Reconciliation acct has no entry".
    When I checked the system parameters, the parameter AKTO_FR is maintained in the system. I tried to find any OSS notes related to this issue, but I cannot find one.
    Anyone can help this issue ?
    Regards,
    Oscar

    Hi !
    Check whether you have configured the “Application Parameters” (ONK5).
    Make sure that you maintain a reconciliation account for the following parameter values:
    AKTO_KTR - Customer reconciliation account - insurance providers
    AKTO_SZ - Customer reconciliation account - patients
    Regards
    Subash Sankar

  • DMS Profile not selecting proper storage category

    We have defined two different profiles in DMS with two different security roles, with the applications for each profile pointing to different storage categories.  If a user has only one of the security roles, the correct storage category is selected automatically when they store an original.  If the user has both security roles, the system automatically selects one of the storage categories, which may or may not be the correct one.
    Is there a way to have the list of available storage categories pop up when the user has both security roles?

    Dear Parin,
    based on your description I'm quite sure that in the customizing of your document type (transaction DC10) the flag
    'Use KPro' is set. If this flag is maintained, the originals can only be stored in a content server storage category other than SAP-SYSTEM.
    SAP-SYSTEM is a non-KPro storage alternative, and I would recommend using the KPro storage concept in DMS.
    Best regards,
    Christoph

  • Item Category Determination during Sales Order Creation

    Dear All,
    This is Chee Wee. I'm new to SD, and I would like to seek advice on a topic related to item categories in sales orders.
    We have a material "Mat005" defined with the "Stocked" item category group in MM03.
    *Stocked means our MM team will always keep stock for the material.
    The document item categories assigned to "Stocked" are "NoPo" --> no automatic PO creation and "AuPO" --> automatic PO creation.
    During SO creation for Mat005 with qty 50 and warehouse stock available of 60, I noticed that when I select the item category "AuPO", the system automatically changes it to "NoPO"; but when I change the order qty to 70, then I am able to change the item category to "AuPO"!!
    Can any guru advise me what the actual setting or configuration is behind determining which item category I can choose?
    Thank you very much,
    Regards,
    Chee Wee

    Dear Chee Wee,
    Basically the item category is determined based on this key combination:
    Item category group (material master) + Sales document type (entered on the creation initial screen) ---> Item category
    For example NORM + OR ---> TAN.
    As per your explanation,
    I suspect that in your case stock availability is also a factor in the item category determination. This might have been done via an enhancement.
    There are two item categories in your process: "NoPo" --> no automatic PO creation and "AuPO" --> automatic PO creation.
    You said the system automatically changes the item category to "NoPO", but when you change the order qty to 70 you are able to change it to "AuPO".
    With 60 qty of stock available and a sales order for 50 qty, the requirement is less than the available stock, so the system will not allow you to enter the AuPO item category, because there is no need to create an automatic PO for stock.
    When you create a sales order for 70 qty with only 60 in stock, there is a shortage of 10 qty. The system then needs to allow an automatic PO to procure the missing 10 qty, so it lets you enter the AuPO item category.
    There might be some workaround involved here; kindly take the help of an ABAPer to get more details about your item category determination.
    I hope this will help you,
    Regards,
    Murali.
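
    The determination logic described above can be sketched outside SAP as a simple key lookup plus an availability override. All names and the override rule below are hypothetical simplifications of what such an enhancement might do, not actual SAP code:

    ```python
    # Simplified, hypothetical model of item category determination:
    # (item category group, sales document type) -> default item category,
    # with a stock-availability override as described in the thread.
    DETERMINATION = {
        ("NORM", "OR"): "TAN",
        ("STOCKED", "OR"): "NoPo",   # default: no automatic PO creation
    }

    def determine_item_category(item_cat_group, doc_type, order_qty, stock_qty):
        default = DETERMINATION[(item_cat_group, doc_type)]
        # Enhancement logic from the thread: allow "AuPO" only when the
        # order quantity exceeds available stock (i.e. a shortage exists).
        if item_cat_group == "STOCKED" and order_qty > stock_qty:
            return "AuPO"
        return default

    print(determine_item_category("STOCKED", "OR", 50, 60))  # NoPo
    print(determine_item_category("STOCKED", "OR", 70, 60))  # AuPO
    ```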

  • DMS storage category

    Hi Friends,
    While designing the storage system we have the following option. We have plenty of space on our DEV / QA server, so we don't want to set up another server for this DMS storage category.
    In this case we want to make the settings as follows while creating the content repository in T.code OAC0:
    Storage type - 03 SAP system data base
    Rep. Sub type -  Normal
    Version no : 0045
    Contents table: DMS_CONT1_CD1
    While creating the document type, we will tick the "Use KPro" check box.
    We want to use the KPro of the SAP server itself. Later, we can add more hard disk space if required.
    Is this OK? Please suggest.
    Regards,
    Sai Krishna

    Hi Sai Krishna,
    There are two 'storage type' options available. Each one has a specific advantage over the other. The storage category is selected on the basis of how the document in question is intended to be used:
    1. With the 'DMS storage' option (may store in SAP DB, vault, RFC archive), you can store up to 2 originals/documents and 99 additional files.
    2. With the 'KPro storage' option (may store in SAP DB, HTTP content server), you can store 'n' originals/documents and 'n' additional files.
    NOTE:
    HTTP content server
    Either the SAP Content Server or a third-party content server can be used.
    SAP database
    If you want to store your documents in the SAP database, you need a content table. All your documents are stored in this table in the SAP database. The content table can be cross-client or client-specific.
    RFC storage system
    RFC is used for storing documents in the context of SAP ArchiveLink, the Archive Development Kit (ADK), or an application that uses SAP ArchiveLink or the ADK.
    Logical repository
    This storage type allows access to different physical repositories, independently of the client and the system ID.
    Structure repository
    The structure repository is only used in the SAP Knowledge Warehouse environment.
    In summary, suggest you decide upon your storage category once the rationale behind the usage of the document is agreed upon by all the stakeholders.
    P.S. Regarding your question specifically, what you intend to do also holds good.
    Regards,
    Pradeepkumar Haragoldavar
    Edited by: Pradeepkumar  Haragoldavar on May 10, 2010 6:54 AM
