Load share/ Redundant configuration question
I have a question regarding a load-balancing/redundant configuration. The network topology is two redundant links (1 Gig each). One end of the links terminates on two Nortel Passport switches; the other end terminates on two Cisco Catalyst 6509 switches.
Does anyone have experience implementing load balancing or an active/standby configuration in this cross-platform setup?
I just want to throw out two approaches here and ask whether they would work.
1. Use GLBP or HSRP on the Cisco side, and whatever equivalent protocol the Nortel side supports. Would this work?
2. Run a standard routing protocol such as OSPF across the two links. Does OSPF automatically load-balance over them, and if one link goes down, would the other link pick up all the traffic?
Thanks.
Hi there,
The weapon of choice here is VRRP. VRRP is very similar to HSRP, but it is an open standard, whereas HSRP is Cisco proprietary. The command set is nearly identical to HSRP's; if you go to an interface and type "vrrp ?", you'll see some very familiar commands.
Not too sure which Passport models you have, but I know the 8600s run VRRP, so I would expect the others do as well.
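If the Passports do run VRRP, the Cisco side needs nothing exotic. A minimal sketch for one of the 6509s (the VLAN interface and all addresses are invented for illustration, not taken from the thread); the peer 6509 would carry the same virtual IP with the default priority:

```
! Sketch only - Vlan10 and 10.1.10.0/24 are assumptions.
! This switch becomes VRRP master for virtual IP 10.1.10.1 because its
! priority (110) beats the peer's default (100); preempt lets it take
! the role back after recovering from a failure.
interface Vlan10
 ip address 10.1.10.2 255.255.255.0
 vrrp 10 ip 10.1.10.1
 vrrp 10 priority 110
 vrrp 10 preempt
```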
As for the routing, OSPF would find both ways to get through the network and then either traverse one link or load-balance over them, depending on what you tell it to do.
If you have 2 Nortel boxes and then 2 Cisco boxes with networks behind the two, then I would run OSPF every time.
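On the failover behaviour asked about in question 2: when both links have equal OSPF cost, IOS installs them as equal-cost paths and load-shares across them by default (up to four paths unless `maximum-paths` says otherwise), and if one link fails OSPF simply reconverges onto the survivor. A sketch, where the process ID and addressing are assumptions:

```
! Sketch - the 10.1.0.0/16 addressing is invented. Both inter-switch
! links fall inside it, so both are advertised into area 0; equal-cost
! multipath then forwards over the two links together.
router ospf 1
 network 10.1.0.0 0.0.255.255 area 0
 maximum-paths 2
```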
Hope that helps - if it does, let me know.
LH
Similar Messages
-
Configuration question on css11506
Hi
One of our VIPs, with 4 local servers, currently uses HTTPS; HTTP is redirected to HTTPS.
Now my client has a requirement that a series of directories must use HTTP, not HTTPS. Questions:
1. Is it possible to configure the VIP to filter those specific directories so they use HTTP instead of HTTPS, while the rest of the pages and directories are still redirected to HTTPS?
2. If not, can I create another VIP using the same local servers, but limited to those specific directories? And with wildcards? The directories are partially wildcarded, something like http://web.domain/casedir*/casenumber.
3. If neither option works, is there any other way I can fix this problem?
Any comments will be appreciated.
Thanks in advance
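One way to attack option 2 on a CSS 11500-series box is a second HTTP content rule on the same VIP whose url clause matches only the special directories, since a more specific content rule wins over the generic redirect rule. This is only a sketch: the owner, service names, and address are invented, and the url wildcard syntax should be verified against your CSS software version:

```
! Sketch only - names and addresses are illustrative.
owner web-owner
  content casedir-http
    vip address 10.10.10.10
    protocol tcp
    port 80
    url "/casedir*"
    add service server1
    add service server2
    active
```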
Julie
-
Configuration Question on local-scheme and high-units
I run my Tangosol cluster with 12 nodes on 3 machines (each machine with 4 cache server nodes). I have 2 important configuration questions. I'd appreciate it if you could answer them ASAP.
- My requirement is that I need only 10000 objects to be in the cluster so that resources can be freed up when other caches are loaded. I configured <high-units> to 10000, but I am not sure whether this is per node or for the whole cluster. I see that the total number of objects in the cluster goes up to 15800 even though I configured 10K as high-units (there is some free memory on the servers in this case). Can you please explain this?
- Is there an easy way to know the memory stats of the cluster? The memory command on the cluster doesn't seem to give me the correct stats. Is there any other utility I can use?
I started all the nodes with the same configuration as below. Can you please answer the above questions ASAP?
<distributed-scheme>
<scheme-name>TestScheme</scheme-name>
<service-name>DistributedCache</service-name>
<backing-map-scheme>
<local-scheme>
<high-units>10000</high-units>
<eviction-policy>LRU</eviction-policy>
<expiry-delay>1d</expiry-delay>
<flush-delay>1h</flush-delay>
</local-scheme>
</backing-map-scheme>
</distributed-scheme>
Thanks
Ravi

> I run my Tangosol cluster with 12 nodes on 3 machines (each machine with 4 cache server nodes). My requirement is that I need only 10000 objects to be in the cluster so that resources can be freed up when other caches are loaded. I configured <high-units> to 10000, but I am not sure whether this is per node or for the whole cluster. I see that the total number of objects in the cluster goes up to 15800 even though I configured 10K as high-units (there is some free memory on the servers in this case). Can you please explain this?
It is per backing map, which is practically per node in case of distributed caches.
> Is there an easy way to know the memory stats of the cluster? The memory command on the cluster doesn't seem to be giving me the correct stats. Is there any other utility that I can use?
Yes, you can get this and quite a lot of other information via JMX. Please check the wiki page for more information.
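To pull those stats, each node's JVM has to expose the Coherence MBeans first. A minimal sketch using the standard Coherence 3.x management system properties (how they are wired into your launch script is an assumption):

```
# Sketch: add these system properties to every cache server's JVM
# arguments, then attach JConsole and browse the Coherence:type=Cache
# MBeans for unit counts and memory usage per node.
-Dtangosol.coherence.management=all
-Dtangosol.coherence.management.remote=true
```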
Best regards,
Robert -
SAP-JEE, SAP_BUILDT, and SAP_JTECHS and Dev Configuration questions
Hi experts,
I am configuring NWDI for our environment and have a few questions that I'm trying to get my arms around.
I've read we need to check-in SAP-JEE, SAP_BUILDT, and SAP_JTECHS as required components, but I'm confused on the whole check-in vs. import thing.
I placed the 3 files in the correct OS directory and checked them in via the check-in tab in CMS. Next, the files show up in the import queue of the DEV tab. My question is: what do I do next?
1. Do I import them into DEV? If so, what is this actually doing? Is it importing into the actual runtime system (i.e. DEV checkbox and parameters as defined in the landscape configurator for this track)? Or is just importing the file into the DEV buildspace of NWDI system?
2. Same question goes for the Consolidation tab. Do I import them in here as well?
3. Do I need to import them into the QA and Prod systems too? Or do I remove them from the queue?
Development Configuration questions ***
4. When I download the development configuration, I can select the DEV or CON workspace. What is the difference? Does DEV point to the sandbox (or central development) runtime system and CON to the consolidation runtime system as defined in the Landscape Configurator? Or is this the DEV and CON workspace/buildspace of the NWDI system?
5. Does the selection here dictate the starting point for the development? What is an example scenarios when I would choose DEV vs. CON?
6. I have heard about the concept of a maintenance track and a development track. What is the difference, and how do they differ from a setup perspective? When would a developer pick one over the other?
Thanks for any advice
-Dave

Hi David,
"Check-In" makes SCA known to CMS, "import" will import the content of the SCAs into CBS/DTR.
1. Yes. For these three SCAs specifically (they only contain buildarchives, no sources, no deployarchives) the build archives are imported into the dev buildspace on CBS. If the SCAs contain deployarchives and you have a runtime system configured for the dev system then those deployarchives should get deployed onto the runtime system.
2. Have you seen /people/marion.schlotte/blog/2006/03/30/best-practices-for-nwdi-track-design-for-ongoing-development ? Sooner or later you will want to.
3. Should be answered indirectly.
4. Dev/Cons correspond to the Dev/Consolidation system in CMS. For each developed SC you have 2 systems with 2 workspaces in DTR for each (inactive/active)
5. You should use dev. I would only use cons for corrections if they can't be done in dev and transported. Note that you will get conflicts in DTR if you do parallel changes in dev and cons.
6. See the link in No. 2.
Regards,
Marc -
Unable to load the pof configuration
Hi all,
I'm trying out POF serialization, but I am unable to load the POF configuration and I'm getting the following error:
2013-06-12 14:41:49,582 [catalina-exec-1] ERROR com.distcachedemo.KnCacheDemoServlet - doPost(HttpServletRequest, HttpServletResponse)::
(Wrapped) java.io.NotSerializableException: com.distcachedemo.dto.KnMasterListResponse
at com.tangosol.util.ExternalizableHelper.toBinary(ExternalizableHelper.java:215)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ConverterValueToBinary.convert(PartitionedCache.CDB:3)
at com.tangosol.util.ConverterCollections$ConverterMap.put(ConverterCollections.java:1674)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.partitionedService.PartitionedCache$ViewMap.put(PartitionedCache.CDB:1)
at com.tangosol.coherence.component.util.SafeNamedCache.put(SafeNamedCache.CDB:1)
at com.distcachedemo.KnCorpContactInfoControllerDemo.getMasterList(KnCorpContactInfoControllerDemo.java:183)
at com.distcachedemo.KnCacheDemoServlet.doPost(KnCacheDemoServlet.java:60)
Please let me know what I need to correct in the configuration below.
NOTE: I have also tried a proxy scheme with the serializer configured, and it did not work either.
In the Coherence logs I haven't found any message showing the POF configuration being loaded.
Configuration used:
============
cache-config.xml::
<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
<caching-scheme-mapping>
<cache-mapping>
<cache-name>nspoc*</cache-name>
<scheme-name>distributed-ns</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>poc*</cache-name>
<scheme-name>distributed</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<distributed-scheme>
<scheme-name>distributed-ns</scheme-name>
<service-name>DistributedCache-ns</service-name>
<thread-count>4</thread-count>
<request-timeout>60s</request-timeout>
<backing-map-scheme>
<external-scheme>
<nio-memory-manager>
<initial-size>1MB</initial-size>
<maximum-size>100MB</maximum-size>
</nio-memory-manager>
<high-units>100</high-units>
<unit-calculator>BINARY</unit-calculator>
<unit-factor>1048576</unit-factor>
</external-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
<distributed-scheme>
<scheme-name>distributed</scheme-name>
<service-name>DistributedCache</service-name>
<thread-count>4</thread-count>
<request-timeout>60s</request-timeout>
<backing-map-scheme>
<read-write-backing-map-scheme>
<internal-cache-scheme>
<external-scheme>
<nio-memory-manager>
<initial-size>1MB</initial-size>
<maximum-size>100MB</maximum-size>
</nio-memory-manager>
<high-units>100</high-units>
<unit-calculator>BINARY</unit-calculator>
<unit-factor>1048576</unit-factor>
</external-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-factory-name>com.distcachedemo.KnPocCacheStoreFactory</class-factory-name>
<method-name>loadCacheStore</method-name>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value>{cache-name}</param-value>
</init-param>
</init-params>
</class-scheme>
</cachestore-scheme>
</read-write-backing-map-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
<local-scheme>
<scheme-name>LocalSizeLimited</scheme-name>
<eviction-policy>LRU</eviction-policy>
<high-units>5000</high-units>
<expiry-delay>1h</expiry-delay>
</local-scheme>
</caching-schemes>
</cache-config>
=================
tangosol-coherence-override.xml:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config coherence-operational-config.xsd">
<cluster-config>
<member-identity>
<cluster-name system-property="tangosol.coherence.cluster">kn_test</cluster-name>
</member-identity>
<unicast-listener>
<well-known-addresses>
<socket-address id="719">
<address>192.168.7.19</address>
<port>8088</port>
</socket-address>
<socket-address id="3246">
<address>192.168.3.246</address>
<port>8088</port>
</socket-address>
<socket-address id="77">
<address>192.168.7.7</address>
<port>8088</port>
</socket-address>
</well-known-addresses>
<address system-property="tangosol.coherence.localhost">192.168.7.7</address>
<port system-property="tangosol.coherence.localport">8088</port>
<port-auto-adjust system-property="tangosol.coherence.localport.adjust">true</port-auto-adjust>
</unicast-listener>
<serializers>
<serializer id="java">
<class-name>com.tangosol.io.DefaultSerializer</class-name>
</serializer>
<serializer id="pof">
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
<init-params>
<init-param>
<param-type>String</param-type>
<param-value>pof-config.xml</param-value>
</init-param>
</init-params>
</serializer>
</serializers>
</cluster-config>
<configurable-cache-factory-config>
<init-params>
<init-param>
<param-type>java.lang.String</param-type>
<param-value system-property="tangosol.coherence.cacheconfig">cache-config.xml</param-value>
</init-param>
</init-params>
</configurable-cache-factory-config>
</coherence>
======================
pof-config.xml:
<?xml version='1.0'?>
<pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config"
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config
coherence-pof-config.xsd">
<user-type-list>
<include>coherence-pof-config.xml</include>
<user-type>
<type-id>1001</type-id>
<class-name>com.distcachedemo.dto.KnMasterListResponse</class-name>
</user-type>
</user-type-list>
</pof-config>
============
Java Code:
package com.distcachedemo.dto;
import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;
import java.io.IOException;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Map;
public class KnMasterListResponse implements PortableObject {

    private static final long serialVersionUID = -9114918011531875153L;

    private KnHierarchyListDTO hierarchyListDTO;
    private Map subsMap;

    public KnHierarchyListDTO getHierarchyListDTO() {
        return hierarchyListDTO;
    }

    public void setHierarchyListDTO(KnHierarchyListDTO hierarchyListDTO) {
        this.hierarchyListDTO = hierarchyListDTO;
    }

    public Map getSubsMap() {
        return subsMap;
    }

    public void setSubsMap(Map subsMap) {
        this.subsMap = subsMap;
    }

    public String toString() {
        StringBuilder strBuffer = new StringBuilder(100);
        if (hierarchyListDTO.getHierachyDTOs() != null) {
            strBuffer.append(" hierarchyListDTO - ").append(hierarchyListDTO.getHierachyDTOs().size());
        }
        if (subsMap != null) {
            strBuffer.append(" subsMap - ").append(subsMap.size());
        }
        return strBuffer.toString();
    }

    @Override
    public void readExternal(PofReader pofReader) throws IOException {
        subsMap = pofReader.readMap(0, new HashMap<>());
        hierarchyListDTO = (KnHierarchyListDTO) pofReader.readObject(1);
    }

    @Override
    public void writeExternal(PofWriter pofWriter) throws IOException {
        pofWriter.writeMap(0, subsMap);
        pofWriter.writeObject(1, hierarchyListDTO);
    }
}
Thanks,
Ravi Shanker

Hi Ravi,
it is generally recommended that all new classes support POF in one way or another: either implement PortableObject, or provide a PofSerializer implementation, so that their state can be serialized optimally. Obviously this is not always possible, but you should try to achieve it.
If it is not possible, you can still convert them to byte[] or String by some other means. If that other means is Java serialization, Coherence provides a different PofContext implementation (SafeConfigurablePofContext) which you can use instead of ConfigurablePofContext; it can fall back to Java serialization for Serializable classes (and also for types not registered in the POF configuration but which implement PortableObject). However, SafeConfigurablePofContext is not recommended in production: Java serialization is generally inferior to POF in both performance and serialized size, SafeConfigurablePofContext does not force you to do the right thing, and Java serialization is not platform-independent, whereas POF is.
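One more observation on the posted configuration, offered as a guess rather than a certainty: the "pof" serializer is declared in tangosol-coherence-override.xml, but neither distributed scheme in cache-config.xml references it, so the DistributedCache service would still use the default Java serializer. That would match the NotSerializableException seen for a class that only implements PortableObject. A sketch of referencing the named serializer from the scheme:

```xml
<!-- Sketch: reference the serializer declared with id "pof" in the
     operational override; repeat for the "distributed-ns" scheme.
     The remaining child elements stay exactly as in the original file. -->
<distributed-scheme>
  <scheme-name>distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <serializer>pof</serializer>
</distributed-scheme>
```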
Best regards,
Rob -
Please share UCS interview questions and answers
Please share UCS interview questions and answers
I have been working on Cisco UCS and VMware administration tasks at a reputed company for the last 4+ years, and I attended DCUCI 4.0 training almost 4 years ago.
Now I am looking for a job change into a UCS, VMware and Vblock architecture position, so if you have any related documents, please share them with me. -
CMS Transport Load Software Component Configuration
Hi All,
We have completed development of a SOAP-to-RFC scenario. We are trying to move the changes to the QA system through CMS transport, but we didn't find the SC under Add SC. So we updated CMS and tried to load the SC configuration; while doing this we are getting:
Load Software Component Configuration
Enter Name of SC configuration file
Last month I created a track, and at that time I didn't get this type of message.
Could you please suggest what I need to enter here.
Thanks,
Venkat

Hi,
Within CMS, you need to add the Software Component from which you need to transport the repository objects.
The Software Component can be added from the Landscape Configurator of CMS.
Hope this helps.
Thanks,
Madhu -
Unable to load Custom POF Configuration
I successfully downloaded Coherence 3.7 and was able to run it from the .NET client with the default Coherence.POF.Config file. As soon as I change the POF configuration to a custom type, I'm unable to run the Coherence server, and I get the following error. Any help is appreciated.
2014-07-25 18:31:07.130/4.930 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=1): Loaded POF configuration from "jar:file:/C:/OracleCoherence/coherence/lib/coherence.jar!/pof-config.xml"
2014-07-25 18:31:07.147/4.947 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=1): Loaded included POF configuration from "jar:file:/C:/OracleCoherence/coherence/lib/coherence.jar!/coherence-pof-config.xml"
2014-07-25 18:31:07.194/4.994 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocation:Management, member=1): Service Management joined the cluster with senior service member 1
2014-07-25 18:31:07.401/5.201 Oracle Coherence GE 3.7.1.0 <D5> (thread=DistributedCache, member=1): Service DistributedCache joined the cluster with senior service member 1
Exception in thread "main" (Wrapped: error creating class "com.tangosol.io.pof.ConfigurablePofContext") (Wrapped: Failed to load POF configuration: Mypof-config.xml) java.io.IOException: The POF configuration is missing: "Mypof-config.xml", loader=sun.misc.Launcher$AppClassLoader@1b90b39
    at com.tangosol.io.ConfigurableSerializerFactory.createSerializer(ConfigurableSerializerFactory.java:46)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.instantiateSerializer(Service.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.ensureSerializer(Service.CDB:32)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.ensureSerializer(Service.CDB:4)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ProxyService.configure(ProxyService.CDB:101)
    at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:17)
    at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
    at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureServiceInternal(DefaultConfigurableCacheFactory.java:1105)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:937)
    at com.tangosol.net.DefaultCacheServer.startServices(DefaultCacheServer.java:81)
    at com.tangosol.net.DefaultCacheServer.intialStartServices(DefaultCacheServer.java:250)
    at com.tangosol.net.DefaultCacheServer.startAndMonitor(DefaultCacheServer.java:55)
    at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:197)
Caused by: (Wrapped: Failed to load POF configuration: Mypof-config.xml) java.io.IOException: The POF configuration is missing: "Mypof-config.xml", loader=sun.misc.Launcher$AppClassLoader@1b90b39
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
    at com.tangosol.run.xml.XmlHelper.loadResourceInternal(XmlHelper.java:341)
    at com.tangosol.run.xml.XmlHelper.loadFileOrResource(XmlHelper.java:283)
    at com.tangosol.io.pof.ConfigurablePofContext.createPofConfig(ConfigurablePofContext.java:835)
    at com.tangosol.io.pof.ConfigurablePofContext.initialize(ConfigurablePofContext.java:797)
    at com.tangosol.io.pof.ConfigurablePofContext.setContextClassLoader(ConfigurablePofContext.java:322)
    at com.tangosol.io.ConfigurableSerializerFactory.createSerializer(ConfigurableSerializerFactory.java:42)
    ... 13 more
Caused by: java.io.IOException: The POF configuration is missing: "Mypof-config.xml", loader=sun.misc.Launcher$AppClassLoader@1b90b39
    at com.tangosol.run.xml.XmlHelper.loadResourceInternal(XmlHelper.java:318)
    ... 18 more

Please find the full log from cache-server.cmd below:
Service=ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_ANNOUNCE),
Id=0, Version=3.7.1}
ToMemberSet=null
NotifySent=false
LastRecvTimestamp=none
MemberSet=MemberSet(Size=1, ids=[1])
Terminate batch job (Y/N)? Y
C:\OracleCoherence\coherence\bin>cache-server
java version "1.7.0_60"
Java(TM) SE Runtime Environment (build 1.7.0_60-b19)
Java HotSpot(TM) Client VM (build 24.60-b09, mixed mode, sharing)
2014-08-05 13:25:25.753/0.327 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational configuration from "jar:file:/C:/OracleCoherence/coherence/lib/coherence.jar!/tangosol-coherence.xml"
2014-08-05 13:25:25.816/0.390 Oracle Coherence 3.7.1.0 <Info> (thread=main, member=n/a): Loaded operational overrides from "jar:file:/C:/OracleCoherence/coherence/lib/coherence.jar!/tangosol-coherence-override-dev.xml"
2014-08-05 13:25:25.816/0.390 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/tangosol-coherence-override.xml" is not specified
2014-08-05 13:25:25.816/0.390 Oracle Coherence 3.7.1.0 <D5> (thread=main, member=n/a): Optional configuration override "/custom-mbeans.xml" is not specified
Oracle Coherence Version 3.7.1.0 Build 27797
 Grid Edition: Development mode
Copyright (c) 2000, 2011, Oracle and/or its affiliates. All rights reserved.
2014-08-05 13:25:25.972/0.546 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Loaded cache configuration from "file:/C:/OracleCoherence/coherence/lib/cache-config.xml"; this document does not refer to any schema definition and has not been validated.
2014-08-05 13:25:26.440/1.014 Oracle Coherence GE 3.7.1.0 <Warning> (thread=main, member=n/a): Local address "127.0.0.1" is a loopback address; this cluster node will not connect to nodes located on different machines
2014-08-05 13:25:26.830/1.404 Oracle Coherence GE 3.7.1.0 <D4> (thread=main, member=n/a): TCMP bound to /127.0.0.1:8088 using SystemSocketProvider
2014-08-05 13:25:30.558/5.132 Oracle Coherence GE 3.7.1.0 <Info> (thread=Cluster, member=n/a): Created a new cluster "cluster:0xFCC1" with Member(Id=1, Timestamp=2014-08-05 13:25:27.251, Address=127.0.0.1:8088, MachineId=45419, Location=site:,process:18420, Role=CoherenceServer, Edition=Grid Edition, Mode=Development, CpuCount=2, SocketCount=2) UID=0x7F00000100000147A52A6E93B16B1F98
2014-08-05 13:25:30.558/5.132 Oracle Coherence GE 3.7.1.0 <Info> (thread=main, member=n/a): Started cluster Name=cluster:0xFCC1
Group{Address=224.3.7.0, Port=37000, TTL=0}
MasterMemberSet(
  ThisMember=Member(Id=1, Timestamp=2014-08-05 13:25:27.251, Address=127.0.0.1:8088, MachineId=45419, Location=site:,process:18420, Role=CoherenceServer)
  OldestMember=Member(Id=1, Timestamp=2014-08-05 13:25:27.251, Address=127.0.0.1:8088, MachineId=45419, Location=site:,process:18420, Role=CoherenceServer)
  ActualMemberSet=MemberSet(Size=1
    Member(Id=1, Timestamp=2014-08-05 13:25:27.251, Address=127.0.0.1:8088, MachineId=45419, Location=site:,process:18420, Role=CoherenceServer)
MemberId|ServiceVersion|ServiceJoined|MemberState
1|3.7.1|2014-08-05 13:25:30.558|JOINED
RecycleMillis=1200000
RecycleSet=MemberSet(Size=0
TcpRing{Connections=[]}
IpMonitor{AddressListSize=0}
2014-08-05 13:25:30.620/5.194 Oracle Coherence GE 3.7.1.0 <Error> (thread=Cluster, member=1): StopRunning ClusterService{Name=Cluster, State=(SERVICE_STARTED, STATE_JOINED), Id=0, Version=3.7.1, OldestMemberId=1} due to unhandled exception:
Exception in thread "main" (Wrapped: Failed to start Service "Management" (ServiceState=SERVICE_STOPPED)) java.lang.RuntimeException: Failed to start Service "Cluster" (ServiceState=SERVICE_STOPPED, STATE_JOINED)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.start(Service.CDB:38)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.start(Grid.CDB:6)
    at com.tangosol.coherence.component.util.SafeService.startService(SafeService.CDB:39)
    at com.tangosol.coherence.component.util.SafeService.ensureRunningService(SafeService.CDB:27)
    at com.tangosol.coherence.component.util.SafeService.start(SafeService.CDB:14)
    at com.tangosol.coherence.component.net.management.Connector.startService(Connector.CDB:58)
    at com.tangosol.coherence.component.net.management.gateway.Remote.registerLocalModel(Remote.CDB:10)
    at com.tangosol.coherence.component.net.management.Gateway.register(Gateway.CDB:6)
    at com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluster(SafeCluster.CDB:46)
    at com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.CDB:2)
    at com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:427)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureServiceInternal(DefaultConfigurableCacheFactory.java:968)
    at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(DefaultConfigurableCacheFactory.java:937)
    at com.tangosol.net.DefaultCacheServer.startServices(DefaultCacheServer.java:81)
    at com.tangosol.net.DefaultCacheServer.intialStartServices(DefaultCacheServer.java:250)
    at com.tangosol.net.DefaultCacheServer.startAndMonitor(DefaultCacheServer.java:55)
    at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:197)
Caused by: java.lang.RuntimeException: Failed to start Service "Cluster" (ServiceState=SERVICE_STOPPED, STATE_JOINED)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.waitAcceptingClients(Service.CDB:12)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.ensureService(ClusterService.CDB:15)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.doServiceJoining(ClusterService.CDB:47)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onServiceState(Grid.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.setServiceState(Service.CDB:8)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.setServiceState(Grid.CDB:21)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid$NotifyStartup.onReceived(Grid.CDB:3)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
2014-08-05 13:25:30.620/5.194 Oracle Coherence GE 3.7.1.0 <Error> (thread=Cluster, member=1):
(Wrapped: Failed to load POF configuration: C:\OracleCoherence\coherence\lib\Mypofconfig.xml) java.io.IOException: The POF configuration is missing: "C:\OracleCoherence\coherence\lib\Mypofconfig.xml", loader=sun.misc.Launcher$AppClassLoader@1b90b39
    at com.tangosol.util.Base.ensureRuntimeException(Base.java:288)
    at com.tangosol.run.xml.XmlHelper.loadResourceInternal(XmlHelper.java:341)
    at com.tangosol.run.xml.XmlHelper.loadFileOrResource(XmlHelper.java:283)
    at com.tangosol.io.pof.ConfigurablePofContext.createPofConfig(ConfigurablePofContext.java:835)
    at com.tangosol.io.pof.ConfigurablePofContext.initialize(ConfigurablePofContext.java:797)
    at com.tangosol.io.pof.ConfigurablePofContext.setContextClassLoader(ConfigurablePofContext.java:322)
    at com.tangosol.util.ExternalizableHelper.ensureSerializer(ExternalizableHelper.java:291)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.ensureSerializer(Service.CDB:28)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.ensureSerializer(Service.CDB:4)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.writeObject(Service.CDB:1)
    at com.tangosol.coherence.component.net.Message.writeObject(Message.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService$ServiceJoining.write(ClusterService.CDB:8)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.serializeMessage(Grid.CDB:14)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher.packetizeMessage(PacketPublisher.CDB:17)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.packetProcessor.PacketPublisher$InQueue.add(PacketPublisher.CDB:11)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.dispatchMessage(Grid.CDB:62)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.post(Grid.CDB:31)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.send(Grid.CDB:1)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService$ServiceJoinRequest.proceed(ClusterService.CDB:35)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.validateNewService(ClusterService.CDB:88)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService$ServiceJoinRequest.onReceived(ClusterService.CDB:66)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:34)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:33)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.onNotify(ClusterService.CDB:3)
    at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
    at java.lang.Thread.run(Unknown Source)
Caused by: java.io.IOException: The POF configuration is missing: "C:\OracleCoherence\coherence\lib\Mypofconfig.xml", loader=sun.misc.Launcher$AppClassLoader@1b90b39
    at com.tangosol.run.xml.XmlHelper.loadResourceInternal(XmlHelper.java:318)
    ... 24 more
2014-08-05 13:25:30.620/5.194 Oracle Coherence GE 3.7.1.0 <D5> (thread=Cluster, member=n/a): Service Cluster left the cluster
2014-08-05 13:25:30.620/5.194 Oracle Coherence GE 3.7.1.0 <Error> (thread=Invocation:Management, member=n/a): Terminating InvocationService due to unhandled exception: java.lang.RuntimeException
2014-08-05 13:25:30.620/5.194 Oracle Coherence GE 3.7.1.0 <Error> (thread=Invocation:Management, member=n/a):
java.lang.RuntimeException: Failed to start Service "Cluster" (ServiceState=SERVICE_STOPPED, STATE_JOINED)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.waitAcceptingClients(Service.CDB:12)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:9)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.poll(Grid.CDB:11)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.ensureService(ClusterService.CDB:15)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ClusterService.doServiceJoining(ClusterService.CDB:47)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onServiceState(Grid.CDB:23)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.setServiceState(Service.CDB:8)
    at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid.setServiceState(Grid.CDB:21)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid$NotifyStartup.onReceived(Grid.CDB:3)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid.onMessage(Grid.CDB:34)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid.onNotify(Grid.CDB:33)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
2014-08-05 13:25:30.620/5.194 Oracle Coherence GE 3.7.1.0 <D5> (thread=Invocatio
n:Management, member=n/a): Service Management left the cluster
2014-08-05 13:25:30.636/5.210 Oracle Coherence GE 3.7.1.0 <Error> (thread=main,
member=n/a): Error while starting service "Management": (Wrapped: Failed to star
t Service "Management" (ServiceState=SERVICE_STOPPED)) java.lang.RuntimeExceptio
n: Failed to start Service "Cluster" (ServiceState=SERVICE_STOPPED, STATE_JOINED
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.s
tart(Service.CDB:38)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid.start(Grid.CDB:6)
at com.tangosol.coherence.component.util.SafeService.startService(SafeSe
rvice.CDB:39)
at com.tangosol.coherence.component.util.SafeService.ensureRunningServic
e(SafeService.CDB:27)
at com.tangosol.coherence.component.util.SafeService.start(SafeService.C
DB:14)
at com.tangosol.coherence.component.net.management.Connector.startServic
e(Connector.CDB:58)
at com.tangosol.coherence.component.net.management.gateway.Remote.regist
erLocalModel(Remote.CDB:10)
at com.tangosol.coherence.component.net.management.Gateway.register(Gate
way.CDB:6)
at com.tangosol.coherence.component.util.SafeCluster.ensureRunningCluste
r(SafeCluster.CDB:46)
at com.tangosol.coherence.component.util.SafeCluster.start(SafeCluster.C
DB:2)
at com.tangosol.net.CacheFactory.ensureCluster(CacheFactory.java:427)
at com.tangosol.net.DefaultConfigurableCacheFactory.ensureServiceInterna
l(DefaultConfigurableCacheFactory.java:968)
at com.tangosol.net.DefaultConfigurableCacheFactory.ensureService(Defaul
tConfigurableCacheFactory.java:937)
at com.tangosol.net.DefaultCacheServer.startServices(DefaultCacheServer.
java:81)
at com.tangosol.net.DefaultCacheServer.intialStartServices(DefaultCacheS
erver.java:250)
at com.tangosol.net.DefaultCacheServer.startAndMonitor(DefaultCacheServe
r.java:55)
at com.tangosol.net.DefaultCacheServer.main(DefaultCacheServer.java:197)
Caused by: java.lang.RuntimeException: Failed to start Service "Cluster" (Servic
eState=SERVICE_STOPPED, STATE_JOINED)
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.w
aitAcceptingClients(Service.CDB:12)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid.poll(Grid.CDB:9)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid.poll(Grid.CDB:11)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.g
rid.ClusterService.ensureService(ClusterService.CDB:15)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.g
rid.ClusterService.doServiceJoining(ClusterService.CDB:47)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid.onServiceState(Grid.CDB:23)
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.s
etServiceState(Service.CDB:8)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid.setServiceState(Grid.CDB:21)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid$NotifyStartup.onReceived(Grid.CDB:3)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid.onMessage(Grid.CDB:34)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.G
rid.onNotify(Grid.CDB:33)
at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
at java.lang.Thread.run(Unknown Source)
C:\OracleCoherence\coherence\bin>
And the cache-server.cmd content is:
@echo off
@rem This will start a cache server
setlocal
:config
@rem specify the Coherence installation directory
set coherence_home=%~dp0\..
@rem specify the JVM heap size
set memory=512m
set java_home=
:start
if not exist "%coherence_home%\lib\coherence.jar" goto instructions
:launch
if "%1"=="-jmx" (
set jmxproperties=-Dcom.sun.management.jmxremote -Dtangosol.coherence.management=all -Dtangosol.coherence.management.remote=true
shift
)
set java_opts=-Xms%memory% -Xmx%memory% %jmxproperties%
java -server -showversion -Dtangosol.coherence.ttl=0 -Dtangosol.coherence.localhost=127.0.0.1 -Dtangosol.pof.enabled=true -Dtangosol.coherence.cacheconfig=C:\OracleCoherence\coherence\lib\cache-config.xml -Dtangosol.pof.config=C:\OracleCoherence\coherence\lib\Mypofconfig.xml %java_opts% -cp "%coherence_home%\lib\coherence.jar" com.tangosol.net.DefaultCacheServer %1
goto exit
:instructions
echo Usage:
echo ^<coherence_home^>\bin\cache-server.cmd
goto exit
:exit
endlocal
@echo on -
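The root cause in the trace above is the unloadable POF file, and the batch file passes exactly that path via -Dtangosol.pof.config. A small pre-flight existence check (a sketch in POSIX shell for illustration; a .cmd script would use "if not exist" the same way the batch already does for coherence.jar) surfaces this class of failure before the cache server is even launched:

```shell
# Minimal sketch: warn before launch if the configured POF file is absent.
# The path checked below is the one from the batch file above.
check_pof() {
  if [ -f "$1" ]; then
    echo "POF config found: $1"
  else
    echo "POF config missing: $1"
  fi
}
check_pof "C:/OracleCoherence/coherence/lib/Mypofconfig.xml"
```

If the file really is at that path, also check that the running user can read it; the loader message in the trace fires for both a missing and an unreadable resource.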
Revision: 17446
Author: [email protected]
Date: 2010-08-23 11:58:36 -0700 (Mon, 23 Aug 2010)
Log Message:
make loading of the configuration xml part of the process.
Modified Paths:
osmf/branches/weiz-neon-sprint3-prototype/apps/samples/framework/DrmOSMFPlayer/DrmOSMFPlayer.mxml
osmf/branches/weiz-neon-sprint3-prototype/framework/OSMF/.flexLibProperties
osmf/branches/weiz-neon-sprint3-prototype/framework/OSMF/org/osmf/configuration/ConfigurationService.as
Added Paths:
osmf/branches/weiz-neon-sprint3-prototype/framework/OSMF/org/osmf/events/ConfigurationEvent.as

My log4j.xml looks like
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">
<appender name="file" class="org.apache.log4j.RollingFileAppender">
<param name="File" value="C:\\niablogs\\niab3.log"/>
<param name="MaxFileSize" value="1MB"/>
<param name="append" value="true"/>
<param name="MaxBackupIndex" value="5"/>
<layout class="org.apache.log4j.PatternLayout">
<param name="ConversionPattern" value="(%-25.25F:%4L) %d{yyyy MM dd HH:mm:ss:SSS} [%-30.30t] %5p - %m %n" />
</layout>
</appender>
<logger name="in.co" additivity="false">
<level value="all" />
<appender-ref ref="file" />
</logger>
</log4j:configuration>

It is working fine if I keep log4j.xml under WEB-INF/classes. If I keep log4j.xml outside of the application, it does not work. As per the Apache log4j documentation, we need to set the XML file path in the log4j.configuration system property. I even tried putting the following in my Listener class: System.setProperty("log4j.configuration", "C:/kondal/log4j.xml");.
Any Ideas please?
Thanks
kondal -
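One thing worth checking (a hedged suggestion, not from the original thread): log4j 1.x resolves the log4j.configuration property as a URL, not a bare file path, and it reads the property when logging first initializes, which can happen before a listener's System.setProperty call runs. Passing a file: URL on the JVM command line avoids both problems. The config path is the one from the post; the application jar name is a placeholder:

```
java -Dlog4j.configuration=file:/C:/kondal/log4j.xml -jar yourapp.jar
```

In a servlet container, the equivalent would be adding the -D flag to the container's JVM options rather than setting the property from application code.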
Problem with ACL in CSS-to-CSS redundancy configuration
I have two CSSes - the first is the master, the second is the backup. When I enable an ACL on the master CSS, it can no longer see the backup CSS. My first rule is to allow all traffic between both CSSes. I have a CSS 11050 with 4.10 Build 10.
Here is a part of my config:
--- begin ---------------------------------------------------
!************************* INTERFACE *************************
interface e8
bridge vlan 254
description "css1 <-> css2 (net 192.168.254.0/30)"
!************************** CIRCUIT **************************
circuit VLAN254
ip address 192.168.254.1 255.255.255.252
redundancy-protocol
!**************************** NQL ****************************
nql n_csw_to_csw
ip address 192.168.254.1 255.255.255.255
ip address 192.168.254.2 255.255.255.255
!**************************** ACL ****************************
acl 1
clause 1 bypass any nql n_csw_to_csw destination nql n_csw_to_csw
apply circuit-(VLAN254)
--- end ---------------------------------------------------
Where is the problem? Is it a bug in my current version or an error in my configuration?
Thanks
Thomas Kukolat

First step: read http://www.cisco.com/warp/customer/117/css_packet_trace.html and trace your non-working configuration.
If you give flow option 0xffffff, you should see why the ACL didn't pass the app traffic.
A second idea is to use normal ACLs without NQLs, with the permit keyword.
Share your experience here again 8-) -
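To make that second suggestion concrete, here is a hypothetical sketch of what plain-address clauses might look like, reusing the redundancy-link addresses from the original config. This is an untested illustration of the idea only, not a verified CSS 11050 configuration; check the clause syntax against your software release:

```
acl 1
  clause 1 permit any 192.168.254.1 255.255.255.255 destination 192.168.254.2 255.255.255.255
  clause 2 permit any 192.168.254.2 255.255.255.255 destination 192.168.254.1 255.255.255.255
apply circuit-(VLAN254)
```

The point is simply to try permit clauses on explicit addresses first, so the redundancy-protocol traffic between the two CSSes is demonstrably matched before reintroducing the NQL-based bypass rule.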
CCMS configuration question - more than one sapccmsr agent on one server
Hello all,
this might be a newbie question, please excuse:
We have several SAP systems installed on AIX in several LPARs. The SAP application server and SAP database are always located in different LPARs, but one LPAR can host application servers of several SAP systems, or databases of several SAP systems.
So I want to configure SAPOSCOL and CCMS agents (sapccmsr) on our database LPARs. SAPOSCOL is running - no problem so far. Since we have DBs for SAP systems with kernels 4.6d, 6.40 (NW2004), and 7.00 (NW2004s), I want to use two different CCMS agents (version 6.40 non-Unicode to connect to SAP 4.6d and 6.40, plus version 7.00 Unicode to connect to SAP 7.00).
AFAIK only one of these can use shared memory segment #99 (the default) - the other one has to be configured to use another one (e.g. #98), but I don't know how (couldn't find any hints in OSS, the online help, or the CCMS agent manual).
Any help would be appreciated
regards
Christian
Edited by: Christian Mika on Mar 6, 2008 11:30 AM

Hello,
has really no one ever had this kind of problem? Do you all use one (e.g. Windows) server for one application (e.g. SAP application or database), or the same server for application and database? Or don't you use virtual hostnames (aliases) for your servers, so that in all the mentioned cases one CCMS agent per server would fit your requirements? I can hardly believe that!
kind regards
Christian -
Load Balancing / CF Edition Question
Hi,
I know very little about load balancing, so please forgive the beginner question.
If I don't plan on using ColdFusion's ClusterCATS load-balancing software solution for multiple web servers, but instead am going to use a hardware load-balancing solution, can I get away with purchasing the ColdFusion Standard Edition for each server, or would I have to purchase the ColdFusion Enterprise Edition for each server to make things work?
Multiple purchases of Enterprise cost so much; I just want to save money if possible.
Thanks in advance,
Joe

Joe_Krako wrote:
> I know very little about load-balancing, so please forgive the beginner question.
> If I don't plan on using ColdFusion's ClusterCATS load balancing software solution for multiple web servers, but instead am going to use a hardware load balancing solution instead, can I get away with purchasing the ColdFusion Standard Edition for each server?

Yes. You will have to configure your load balancer to use sticky sessions if you want to use session variables, and you will not have some of the scalability features of CF Enterprise, but it will work.
I don't see any requirement to buy Enterprise edition licenses in the EULA, but check that for yourself:
http://www.adobe.com/products/eula/server/
Jochem
Jochem van Dieten
Adobe Community Expert for ColdFusion -
Share project configuration with team and Build Path?
Question 1:
Is it possible to share the Flex Builder project settings (Flex Build Path, etc.) with other users? It would be nice to be able to do this so all developers on a team do not have to follow a step-by-step procedure for setting up a project. For example, in a Java project you can simply check in the project and classpath files, and new developers don't have to spend any time setting up their IDE. I took a look at what Flex is doing and it appears it saves this information (at least some of it) in the plugins directory. I can't find exactly where, but it seems very odd.
Question 2:
When setting the Flex build path, is there a way to set up an SWC folder that recursively adds SWCs in sub-directories? We pull in Flex libs from Artifactory, which places them into an un-flattened structure. Since all the libs are in sub-dirs, they are not found. Do we have to manually add each lib individually?
Any ideas on either of these questions? Thank you.

Re: Question 1.
Do you mean check into SVN or similar? Anyway, it seems Flash Builder saves some configuration about build paths etc. in the .actionScriptProperties file, which is found in the root folder of a project. Another option might be to export the project as a Flash Builder project by right-clicking it in the package explorer and choosing Export.
Re: Question 2.
AFAIK the IDE doesn't support recursively going through a folder to find all SWCs. You could probably solve this using an ANT script. E.g. the following script recursively copies all files with the .swc extension from the "libraries (AS3)" folder to the libs folder of a project.
<project name="swcTest" default="copy.swc" basedir=".">
    <property name="swc.src" value="${basedir}/../../libraries (AS3)/" />
    <property name="swc.dest" value="${basedir}/libs/" />
    <target name="copy.swc" description="Recursively copies swc files from source to target folder">
        <echo message="${swc.src}" />
        <echo message="${swc.dest}" />
        <copy todir="${swc.dest}" failonerror="true" flatten="true">
            <fileset dir="${swc.src}" includes="**/*.swc" />
        </copy>
    </target>
</project> -
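The same flatten-and-copy idea can be sketched outside ANT. The following POSIX shell version is an illustration only; the directory names are demo placeholders, not the "libraries (AS3)" and "libs" folders from the ANT script above:

```shell
# Recursively gather every .swc under a source tree into one flat folder,
# mirroring ANT's <copy flatten="true"> with includes="**/*.swc".
mkdir -p demo_src/nested/deeper demo_libs
touch demo_src/nested/a.swc demo_src/nested/deeper/b.swc demo_src/readme.txt
find demo_src -type f -name '*.swc' -exec cp {} demo_libs/ \;
ls demo_libs
```

Note that, like ANT's flatten, files with the same name in different subdirectories would overwrite each other in the destination folder.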
DownStream configuration questions
Hello,
I've some questions about the GG downstream configuration.
When a downstream server is configured, the extraction process must make two connections:
1.to the source database
2. To the Downstream database
I understand that the connection to the source database , is to translate the OBject_id found in the redo stream or other object references database.
Is correct this ?
I noticed that starts at least 4 sessions with the user specified in the connection.
Besides access to metadata could also access information that can not be generated from the redo stream and have to perform fetch operations, doing SELECTs statements.
I've observed this at down stream alert_log:
LOGMINER: Begin mining logfile during dictionary load for session 3 thread 1 sequence 42, /u02/oradata/dwnstr/standby/1_42_869313745.dbf
Does this operation have some relationship with the source dictionary access?
Many Thanks
Arturo

It's actually a bit of a shame to have that fast 1TB as your system drive, as the ideal is to keep the system drive clean, except for OS and programs. Still, it's fast, and that counts.
My workstation is set up this way:
C:\ OS and programs with part of Page File
D:\ Media and other part of Page File
E:\ Projects and Scratch Disks in Project folder structure
F:\ Exports Video
G:\ Audio files
H:\ Media storage
Gigabit NAS media archival storage. I bring these Assets onto D:\ as copies, so I am never working with originals - other than Capture files from tape, but I archive my tapes, in case I have to recapture
I also have 2 multi-drives of different brands and a couple dozen 2TB FW-800 externals for archiving Projects, or for transporting Project between the workstation and my laptop.
I tested my Page File and found the best results on the workstation with it fixed and split. The next best was fixed and on D:\. It differs machine to machine, as it's better fixed and on D:\ on my laptop with 3 identical 200GB SATA II's. Given your C:\'s speed and size, I'd experiment with it fixed and on C:\ to see what performance increase, or hit, you take.
Good luck,
Hunt -
Configurator question (Connectors)
Hi,
I am not sure this is the appropriate forum to post this question, but I couldn't find another one.
My question is: when loading a model with a connector, it loads the target item as "inactive" unless I explicitly load the target as well. Is there a way to load the target item, maybe with a functional companion, without having the user explicitly select both the main and the target items into the configuration?
Thanks in advance for your help,
Pedro

Hi,
yes, let me be more clear... our model is like this:
TOP
|-- Item 1
|   |-- Item 1.1
|
|-- Item 2
Item 2 has a Connector feature which connects to Item 1.1
Because Item 1 and Item 2 are both optional and IB Trackable, I can add them to a configuration (we are using Quoting) from Installed Base independently, which means I can configure only one of them. But when Item 2 is connected to Item 1.1, if I add Item 1 to the configuration, Item 2 is also displayed in the UI (even though it is not really in the configuration), all greyed-out (read-only).
My question is whether I can add an item from Installed Base into a configuration at runtime (maybe using a functional companion), meaning after the configuration is open. I want to allow the user to add only Item 1 to the quote but have Item 2 available for writing, without the user having to add it explicitly to the quote.
Thanks in advance,
Pedro