Kernel Configuration Question
I have some questions on proper kernel configurations on a Sun SPARC server that we will be setting up here next month.
We are getting a Sun M5000 server.
We are going to install 9i, 10g, and 11g homes on this server to run our instances.
My main question is: with multiple versions of Oracle DB running on this server, what is the proper method for configuring the kernel parameters? I've poked around on Metalink, but I cannot seem to find a definitive guide on how to set up the parameters.
Anyone have any suggestions or recommendations?
I appreciate it.
_Jason
Kernel configuration is done on a per-server basis (or on a per-project basis on newer Solaris releases).
Just make sure your settings satisfy the prerequisites in the Oracle documentation; since you have three different versions, use the highest value required by any of them.
http://download.oracle.com/docs/cd/B19306_01/install.102/b15704/pre_install.htm#sthref258
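For illustration only: on older Solaris releases these limits live in /etc/system, while on Solaris 10 (the usual OS for an M5000) the shared-memory limits are largely superseded by project resource controls such as project.max-shm-memory. The parameter names below are the standard ones, but the values are placeholders, not Oracle's documented minimums - take the highest value that any of the 9i/10g/11g install guides requires:

```
* /etc/system fragment (pre-Solaris 10 style). Values are illustrative
* placeholders only -- use the highest value required by any of the
* 9i/10g/11g installation guides.
set shmsys:shminfo_shmmax=8589934592
set shmsys:shminfo_shmmni=4096
set semsys:seminfo_semmni=1024
set semsys:seminfo_semmsl=256
```

On Solaris 10, check the equivalent resource control with prctl -n project.max-shm-memory -i project <project> rather than editing /etc/system.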
Similar Messages
-
RH5.2(2.0.36) O8.0.5 Kernel Configuration Questions
Looking for wisdom, insight, and a step-by-step procedure:
I have the CD-ROM from Oracle8 release 8.0.5 Standard Edition for
Linux, and I'm running Kernel 2.0.36 in a Red Hat 5.2
environment.
I'm working through the Installation Guide (referred to as the
"manual" in the following text). I'm in no hurry, and I would
rather get it right (even robust) the first time than get it
running as soon as possible. So, I'm trying to follow the
instructions. Here's what I've come up with so far; please post
your comments as replies here on technet.oracle.com:
1. The first instruction in the manual under Configure LINUX
Kernel for Oracle says "Use the ipcs command to obtain a list of
the system's current shared memory and semaphore segments, and
their identification number and owner." Specifically, what does
this mean that I am looking for?
# ipcs
------ Shared Memory Segments --------
key        shmid  owner   perms  bytes  nattch  status
0x00000000 0      nobody  600    52228  11      dest
------ Semaphore Arrays --------
key        semid  owner   perms  nsems  status
------ Message Queues --------
key        msqid  owner   perms  used-bytes  messages
0x00000000 0      root    700    0           0
I did not find the man page on ipcs particularly enlightening.
2. In the next section, entitled, "Set the kernel parameters
corresponding to ...", I have gathered (though I wish the manual
said so explicitly) that these need to be set in the source for
the kernel and that the kernel would then need to be recompiled.
(OK, end of whining, now onto the nitty gritty.) I have the
Shared Memory and Semaphore parameters in Table 2-2 and want to
plug them into the source.
First change needed, in file /usr/src/linux/include/linux/sem.h:
change #define SEMMSL 32 to #define SEMMSL 128
(I'll give a semaphore id 128 identifiers so that I can run 128
oracle processes. The max is 512 according to sem.h, so I took
the geometric mean, since this is intended to be a test bed.)
I see no other changes needed in this file, and few implications
that I'm going to cause major problems with this change. On to
the next.
Second changes needed, in file
/usr/src/linux/include/asm-i386/shmparam.h:
Now, here's the part upon which I am really stumped.
change #define SHMMAX 0x2000000 to #define SHMMAX 0xFFFFFFFF
(I'm pretty sure that decimal 4294967295 is hex FFFFFFFF, though
the latter is a lot easier to write correctly!)
I looked up a few lines and read
SHMMAX <= (PAGE_SIZE << SHMIDX_BITS)
Currently, PAGE_SIZE is set in
/usr/src/linux/include/asm-i386/page.h
to 1UL<<12 (0x1000).
However, SHMIDX_BITS + SHMID_BITS <= 24 from a few lines
above. And, from a few more lines above, SHMIDX_BITS is set to
15 and SHMID_BITS is set to 7. So, either I can tweak the two
BITS values or the PAGE_SIZE, right? (THIS IS NOT A RHETORICAL
QUESTION. I WOULD LIKE YOUR INPUT!).
However, to get SHMMAX to 2^32 with a page size of 2^12, I would
need SHMIDX_BITS set to 20, which would leave only 4 bits for
SHMID_BITS, which would in turn put SHMMNI at 16, which is too
low for the value recommended in the oracle manual, 100 (I wonder
if that is really 100 or if it is supposed to be 0x100...).
Anyway, it looks like I cannot do the whole job with SHM_BITS,
or even most of it.
This seems to lead me to adjusting page size, from 2^12 to 2^15.
To do this, I'll need to alter /usr/src/linux/mm/kmalloc.c in
order not to break the PAGESIZE-dependent structs. Does this make
sense?
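A quick sanity check of the arithmetic above (a sketch of the constraint, not a recommendation for any particular kernel settings):

```python
# Sanity-check the bit-budget reasoning from the post above.

# Geometric mean of SEMMSL's default (32) and maximum (512):
semmsl = int((32 * 512) ** 0.5)
assert semmsl == 128  # the value chosen above

# SHMMAX is bounded by PAGE_SIZE << SHMIDX_BITS, with
# SHMIDX_BITS + SHMID_BITS <= 24 and SHMMNI = 2**SHMID_BITS.
PAGE_SIZE = 1 << 12  # 0x1000, the default

def shm_limits(shmidx_bits, shmid_bits):
    """Return (SHMMAX upper bound, SHMMNI) for a given bit split."""
    assert shmidx_bits + shmid_bits <= 24
    return PAGE_SIZE << shmidx_bits, 1 << shmid_bits

# Stock 2.0.36 values: SHMIDX_BITS=15, SHMID_BITS=7
shmmax, shmmni = shm_limits(15, 7)
print(hex(shmmax), shmmni)  # SHMMAX bound 0x8000000 (128 MB), SHMMNI 128

# Pushing SHMMAX to 2**32 with 4 KB pages needs SHMIDX_BITS=20,
# leaving only 4 bits for SHMID_BITS:
shmmax, shmmni = shm_limits(20, 4)
print(hex(shmmax), shmmni)  # SHMMAX bound 0x100000000, SHMMNI only 16
```

This confirms the conclusion above: the bit budget alone cannot give both a 4 GB SHMMAX and the manual's recommended SHMMNI of 100.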
Has anyone (even at Oracle...) done this level of analysis when
installing Oracle8 on Linux?
I saw that a few people have edited files, but few have mentioned
which files, and I have seen no details at all on the changes
made. Does this resemble the reasoning that they took?
Do I really have to change PAGE_SIZE to pull this off?
Do the instructions need to be changed? I think that they could
be a little more explicit, IMHO.
If you have read this far, thanks very much! I look forward to
your feedback.
I am really sorry that this probably will not answer your questions,
but here is my advice:
use the $ORACLE_HOME/bin/tstshm utility to find out
"Total shared memory attached".
If it is greater than the SGA size you expect, then do nothing.
If it is not, then change some of the parameters in the kernel.
If you need more information and you have a Technical Support
Contract, you can call an analyst and get more detailed
information.
I'm sure at Oracle there is a huge amount of resources about that.
Regards: Michael Daskaloff
P.S. Bear in mind that the default SHM parameters in RedHat 5.x
are good for an initial installation.
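As a sketch of that check (the tstshm output line format here is an assumption, not taken from Oracle docs - adjust the pattern to whatever your tstshm actually prints):

```python
import re

def shm_attached_bytes(tstshm_output: str) -> int:
    """Pull the 'Total shared memory attached' figure out of tstshm's
    output. The line format is assumed, not taken from Oracle docs."""
    m = re.search(r"Total shared memory attached\D*(\d+)", tstshm_output)
    if m is None:
        raise ValueError("no 'Total shared memory attached' line found")
    return int(m.group(1))

# Example with a made-up output snippet and a 150 MB planned SGA:
sample = "Total shared memory attached: 268435456\n"
planned_sga = 150 * 1024 * 1024
attached = shm_attached_bytes(sample)
print("kernel OK" if attached >= planned_sga else "raise SHM limits")  # -> kernel OK
```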
Art Eschenlauer (guest) wrote:
: Looking for wisdom, insight, and a step-by-step procedure:
-
[SOLVED] Arch Kernel Configuration
Hi, I'd like to recompile the kernel, but I would like to know which are the default configuration settings that the compiled kernel26 package of the core repository has. I would like to tweak some things, but starting from that configuration, not from the default kernel configuration found on www.kernel.org.
I would also like to know if ABS could be used to do it, or if I can get a configuration file somewhere and use it with ABS.
Thank you very much.
Last edited by KaoDome (2009-01-14 00:24:56)
KaoDome wrote:
Ok, thank you very much! I've found the configuration within the ABS package. Then I can edit it running make menuconfig (or the GUI configurator) and save the edited file as config (in my case; if it were x86_64 it would be config.x86_64).
I had to change the checksum for that file in the PKGBUILD in order for it to work.
It's now compiling!
Another question... Does it use the CFLAGS defined in /etc/makepkg.conf when compiling the kernel? I hope so...
Once more, thank you very much for all!
You can also use the ABS PKGBUILD and add the "make menuconfig" step to be able to change the -ARCH settings... then when you exit it will save (and use) your newly created .config.
# load configuration
make menuconfig
# build!
So it will load the default -ARCH generic setup and then you can modify to your hardware specs.
My AMD64 300HZ -Os Low-Latency kernel:
$ uname -osrpmi
Linux 2.6.28-ARCHtestAMD x86_64 AMD Turion(tm) 64 X2 Mobile Technology TL-60 AuthenticAMD GNU/Linux
Last edited by methuselah (2009-01-14 18:58:30) -
[solved] Kernel configuration for Heavy Calculations Servers
Hello all,
I've been using Arch on half of the dual-Xeon workstations of the research lab I work at (these are the only ones under my supervision), and so far they have been continuously hailed for their greater performance compared to the other machines (same configuration) running Fedora.
I did some minor tweaks on makepkg.conf (using the suggested compilation flags of the software I use) and on the BIOS to improve memory latency, but I suppose I could squeeze even more juice from those boxes if I chose the right kernel options.
Finally, my questions are:
- regarding the kernel, what options could I use to improve performance (I suppose using a home-PC kernel configuration is not the best option)?
- regarding Arch Linux, what else can I do?
Usage:
- only to perform molecular modeling calculations (both molecular dynamics and quantum dynamics )
- data transfer using "scp"
Below are some important info about the hardware and config files:
-lspci
00:00.0 Host bridge: Intel Corporation 5000V Chipset Memory Controller Hub (rev b1)
00:02.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x8 Port 2-3 (rev b1)
00:03.0 PCI bridge: Intel Corporation 5000 Series Chipset PCI Express x4 Port 3 (rev b1)
00:08.0 System peripheral: Intel Corporation 5000 Series Chipset DMA Engine (rev b1)
00:10.0 Host bridge: Intel Corporation 5000 Series Chipset FSB Registers (rev b1)
00:10.1 Host bridge: Intel Corporation 5000 Series Chipset FSB Registers (rev b1)
00:10.2 Host bridge: Intel Corporation 5000 Series Chipset FSB Registers (rev b1)
00:11.0 Host bridge: Intel Corporation 5000 Series Chipset Reserved Registers (rev b1)
00:13.0 Host bridge: Intel Corporation 5000 Series Chipset Reserved Registers (rev b1)
00:15.0 Host bridge: Intel Corporation 5000 Series Chipset FBD Registers (rev b1)
00:16.0 Host bridge: Intel Corporation 5000 Series Chipset FBD Registers (rev b1)
00:1c.0 PCI bridge: Intel Corporation 631xESB/632xESB/3100 Chipset PCI Express Root Port 1 (rev 09)
00:1d.0 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #1 (rev 09)
00:1d.1 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #2 (rev 09)
00:1d.2 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #3 (rev 09)
00:1d.3 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset UHCI USB Controller #4 (rev 09)
00:1d.7 USB Controller: Intel Corporation 631xESB/632xESB/3100 Chipset EHCI USB2 Controller (rev 09)
00:1e.0 PCI bridge: Intel Corporation 82801 PCI Bridge (rev d9)
00:1f.0 ISA bridge: Intel Corporation 631xESB/632xESB/3100 Chipset LPC Interface Controller (rev 09)
00:1f.1 IDE interface: Intel Corporation 631xESB/632xESB IDE Controller (rev 09)
00:1f.2 IDE interface: Intel Corporation 631xESB/632xESB/3100 Chipset SATA IDE Controller (rev 09)
00:1f.3 SMBus: Intel Corporation 631xESB/632xESB/3100 Chipset SMBus Controller (rev 09)
01:00.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express Upstream Port (rev 01)
01:00.3 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express to PCI-X Bridge (rev 01)
02:00.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express Downstream Port E1 (rev 01)
02:01.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express Downstream Port E2 (rev 01)
02:02.0 PCI bridge: Intel Corporation 6311ESB/6321ESB PCI Express Downstream Port E3 (rev 01)
05:00.0 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)
05:00.1 Ethernet controller: Intel Corporation 80003ES2LAN Gigabit Ethernet Controller (Copper) (rev 01)
09:0c.0 VGA compatible controller: ATI Technologies Inc ES1000 (rev 02)
-cat /proc/cpuinfo - it's the same for the eight "processors"
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Xeon(R) CPU E5410 @ 2.33GHz
stepping : 6
cpu MHz : 2331.000
cache size : 6144 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 10
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm tpr_shadow vnmi flexpriority
bogomips : 4656.91
clflush size : 64
cache_alignment : 64
address sizes : 38 bits physical, 48 bits virtual
power management:
-/etc/rc.conf
MOD_AUTOLOAD="yes"
MODULES=(acpi-cpufreq )
USELVM="no"
DAEMONS=(syslog-ng hal sensorsd sshd network netfs crond)
-free -m
total used free shared buffers cached
Mem: 7968 3094 4874 0 75 2784
-/+ buffers/cache: 233 7734
Swap: 19077 0 19077
- modifications i made to the /etc/makepkg.conf
# ARCHITECTURE, COMPILE FLAGS
CARCH="x86_64"
CHOST="x86_64-unknown-linux-gnu"
#-- Exclusive: will only run on x86_64
# -march (or -mcpu) builds exclusively for an architecture
# -mtune optimizes for an architecture, but builds for whole processor family
CFLAGS="-march=native -mtune=native -O3 -pipe"
CXXFLAGS="-march=native -mtune=native -O3 -pipe"
#LDFLAGS=""
#-- Make Flags: change this for DistCC/SMP systems
MAKEFLAGS="-j9"
Thanks in advance
Last edited by Duca (2009-09-28 04:30:04)
graysky wrote: Your xfers are via scp... are the files large or numerous small files? Do you have the ability to use jumbo frames on your network, if the network is a GigLAN backbone? If so, you'll have to test to see if a non-standard 4000 or 9000 MTU size improves xfers.
Thanks for your rapid response. I googled about it, but unfortunately I don't have a GigLAN in the lab, although usually the files transferred exceed 5 GB in size.
Would the scheduler have a significant impact on performance? The servers are used individually (not sharing load like in a Beowulf cluster).
The disk I/O operations are mostly writes into the output files (a few bytes per write); would reiserfs be superior to ext3?
PS.: I'm feeling really stupid for not reading more about kernel26-lts, for it has all the major options I was thinking about.
Last edited by Duca (2009-09-27 18:26:34) -
SAP-JEE, SAP_BUILDT, and SAP_JTECHS and Dev Configuration questions
Hi experts,
I am configuring NWDI for our environment and have a few questions that I'm trying to get my arms around.
I've read we need to check-in SAP-JEE, SAP_BUILDT, and SAP_JTECHS as required components, but I'm confused on the whole check-in vs. import thing.
I placed the 3 files in the correct OS directory and checked them in via the check-in tab on CMS. Next, the files show up in the import queue for the DEV tab. My questions are what do I do next?
1. Do I import them into DEV? If so, what is this actually doing? Is it importing into the actual runtime system (i.e. the DEV checkbox and parameters as defined in the landscape configurator for this track)? Or is it just importing the file into the DEV buildspace of the NWDI system?
2. Same question goes for the Consolidation tab. Do I import them in here as well?
3. Do I need to import them into the QA and Prod systems too? Or do I remove them from the queue?
Development Configuration questions ***
4. When I download the development configuration, I can select the DEV or CONS workspace. What is the difference? Does DEV point to the sandbox (or central development) runtime system and CONS to the consolidation runtime system as defined in the landscape configurator? Or is this the DEV and CONS workspace/buildspace of the NWDI system?
5. Does the selection here dictate the starting point for the development? What is an example scenario where I would choose DEV vs. CONS?
6. I have heard about the concept of a maintenance track and a development track. What is the difference, and how do they differ from a setup perspective? When would a developer pick one over the other?
Thanks for any advice
-Dave
Hi David,
"Check-In" makes SCA known to CMS, "import" will import the content of the SCAs into CBS/DTR.
1. Yes. For these three SCAs specifically (they only contain buildarchives, no sources, no deployarchives) the build archives are imported into the dev buildspace on CBS. If the SCAs contain deployarchives and you have a runtime system configured for the dev system then those deployarchives should get deployed onto the runtime system.
2. Have you seen /people/marion.schlotte/blog/2006/03/30/best-practices-for-nwdi-track-design-for-ongoing-development ? Sooner or later you will want to.
3. Should be answered indirectly.
4. Dev/Cons correspond to the Dev/Consolidation systems in CMS. For each developed SC you have 2 systems, with 2 workspaces in DTR for each (inactive/active).
5. You should use dev. I would only use cons for corrections if they can't be done in dev and transported. Note that you will get conflicts in DTR if you do parallel changes in dev and cons.
6. See the link in No. 2.
Regards,
Marc -
Sorry but it has been a while. After reading all the kernel threads, I still have a few kernel patch questions.
1) Sequencing - Does it matter in what sequence I apply all the patches? Alphabetical order? Date sequence? Always finish with DW?
2) Versioning info - some commands have a -v parameter to tell what version you currently have. Some don't. Is there some way to find out versioning info for all potential patches?
3) Marketplace - I see there is a "complete" kernel on the Marketplace now (albeit in two parts). However, it's never current. It looks like the hot setup is to download that, patch it, and then slip it in. Does everyone agree with that assessment?
4) Yes, I know the truly hot setup would be to contract with Volker to get a fully patched CD, but since we were stuck on 4.0B for so long, I haven't had a chance to keep my skills up to date. This looks like a good opportunity.
Thanks,
Rick
Hi my friend,
I assume you're referring to how to load a new 7.00 kernel, since you didn't mention it. Here are the steps; for details see Note 912575 - iSeries: Using LODSAPKRN to load a 7.00 kernel.
1. Download the latest SAPEXE.SAR, SAPEXEDB.SAR and IGSEXE.SAR from SWDC.
2. Create a directory in IFS and put 3 SAR files there, create a stream file called "parts" with content below:
SAPEXE.SAR,
SAPEXEDB.SAR,
IGSEXE.SAR,
P.S.: I opted for this way to patch the IGS myself because it's more convenient; the formal IGS patching procedure is introduced in
Note 937000 - iSeries: Installing and patching the IGS
3. Log on as QSECOFR or an equivalent user (in 912575 it says SIDADM, which can sometimes lead to a lack of authorization), and run the command:
LODSAPKRN DEV(STMF) MNTPNT('<dir>') KRNLIB(<kernel library>) USERDEF(YES) LIST('<dir>/parts')
4. Stop SAP system and then remove old kernel:
RMVSAP SID(<SID>) DLTKRNLIB(*NO)
5. Change the library list (EDTLIBL) and replace the old kernel library with the new 7.00 kernel library.
6. Activate the new 7.00 kernel:
APYSAP TYPE(*KERNEL) SID(<SID>) DSTLIB(<new kernel library>)
7. Delete all SQL packages: DLTR3PKG SID(<SID>)
then start SAP system
Regards, -
Configuration question on css11506
Hi
One of our VIPs, with 4 local servers, currently has HTTPS; HTTP is redirected to HTTPS.
Now, my client has a problem in which a series of directories needs to use HTTP, not HTTPS. My questions:
1. Is it possible to configure the VIP to filter the special directories and let them use HTTP, while the rest of the pages and directories redirect to HTTPS?
2. If not, can I make another VIP using the same local servers, but limited to only the special directories? And with wildcards? The directories are partially wildcarded, something like http://web.domain/casedir*/casenumber.
3. If neither option works, is there any way I can fix this problem?
Any comments will be appreciated
Thanks in advance
Julie -
Configuration Question on local-scheme and high-units
I run my Tangosol cluster with 12 nodes on 3 machines (each machine with 4 cache server nodes). I have 2 important configuration questions. I'd appreciate it if you could answer them ASAP.
- My requirement is that I need only 10000 objects to be in the cluster so that the resources can be freed up when other caches are loaded. I configured <high-units> to be 10000, but I am not sure if this is per node or for the whole cluster. I see that the total number of objects in the cluster goes up to 15800 objects even when I configured 10K as high-units (there is some free memory on the servers in this case). Can you please explain this?
- Is there an easy way to know the memory stats of the cluster? The memory command on the cluster doesn't seem to be giving me the correct stats. Is there any other utility that I can use?
I started all the nodes with the same configuration as below. Can you please answer the above questions ASAP?
<distributed-scheme>
<scheme-name>TestScheme</scheme-name>
<service-name>DistributedCache</service-name>
<backing-map-scheme>
<local-scheme>
<high-units>10000</high-units>
<eviction-policy>LRU</eviction-policy>
<expiry-delay>1d</expiry-delay>
<flush-delay>1h</flush-delay>
</local-scheme>
</backing-map-scheme>
</distributed-scheme>
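Since <high-units> applies to each backing map (i.e., per storage-enabled node, as the reply in this thread notes), a back-of-the-envelope calculation shows why the cluster can hold more than 10000 entries in total. A sketch, assuming 12 storage nodes and ignoring backup copies:

```python
# <high-units> is enforced per backing map (per storage-enabled node),
# not cluster-wide. With 12 nodes the aggregate cap is therefore:
nodes = 12
high_units = 10000
cluster_cap = nodes * high_units
print(cluster_cap)  # 120000 -- so 15800 total entries never triggers eviction

# To cap the whole cluster at roughly 10000 primary entries, divide by
# the node count (ignoring backup copies, which also consume units):
per_node = high_units // nodes
print(per_node)  # 833
```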
Thanks
Ravi
: I configured the <high-units> to be 10000 but I am not sure if this is per node or for the whole cluster.
It is per backing map, which is practically per node in the case of distributed caches.
: Is there an easy way to know the memory stats of the cluster? Is there any other utility that I can use?
Yes, you can get this and quite a number of other information via JMX. Please check this wiki page for more information.
Best regards,
Robert -
CCMS configuration question - more than one sapccmsr agent on one server
Hello all,
this might be a newbie question, please excuse:
We have several SAP systems installed on AIX in several LPARs. The SAP application server and SAP database are always located in different LPARs, but one LPAR can host application servers of several SAP systems, or databases of several SAP systems.
So I want to configure SAPOSCOL and CCMS agents (sapccmsr) on our database LPARs. SAPOSCOL is running - no problem so far. Because we have DBs for SAP systems with kernels 4.6D, 6.40 (NW2004), and 7.00 (NW2004s), I want to use two different CCMS agents (version 6.40 non-Unicode to connect to SAP 4.6D and 6.40, plus version 7.00 Unicode to connect to SAP 7.00).
AFAIK only one of these can use shared memory segment #99 (the default) - the other one has to be configured to use another one (e.g. #98), but I don't know how (I couldn't find any hints on OSS + Online Help + the CCMS agent manual).
Any help would be appreciated
regards
Christian
Edited by: Christian Mika on Mar 6, 2008 11:30 AM
Hello,
has really no one ever had this kind of problem? Do you all use either one (e.g. Windows) server for one application (e.g. SAP application or database), or the same server for application and database? Or don't you use virtual hostnames (aliases) for your servers, so that in all the mentioned cases one CCMS agent on one server would fit your requirements? I can hardly believe that!
kind regards
Christian -
Closed loop configuration question
I have a motor(with encoder feedback) attached to a linear actuator(with end limit switches).
The motor has a commercially bought servo drive for control.
The servo drive will accept either step/direction (2 separate TTL
digital pulse train inputs) or an analog -10 to +10 VDC input for
control.
The purpose is to drive a linear actuator (continuously in and
out) in closed-loop operation, utilizing a setpoint variable (SV)
value from a file converted to a frequency, to compare with an
actual, measured position variable (PV) frequency.
I have created and experimented with individual VIs that allow
analog control and digital pulse train control (thankfully with
the help of examples).
Before I pose my question, I would like to make the following
observation: it is my understanding that closed-loop control
means that I don't need to know an exact position at which to
drive, but rather to constantly compare PV and SV through PID
application.
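That understanding is basically right: a PID loop simply acts on the SV - PV error every cycle. A minimal sketch of such a loop in Python (the plant model, gains, and timing here are invented for illustration, not tuned for this actuator or drive):

```python
# Minimal discrete PID loop driving a toy first-order "actuator"
# model toward a setpoint. All numbers are illustrative only.
def pid_step(error, state, kp=2.0, ki=0.5, kd=0.1, dt=0.1):
    """One PID update; state carries (integral, previous error)."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = kp * error + ki * integral + kd * derivative
    return output, (integral, error)

setpoint = 100.0   # SV (e.g. target frequency)
position = 0.0     # PV
state = (0.0, 0.0)
for _ in range(200):
    output, state = pid_step(setpoint - position, state)
    # Toy first-order plant: position creeps toward the commanded output.
    position += (output - position) * 0.1

print(round(position, 1))
```

After 200 iterations the toy plant settles near the setpoint; the integral term is what removes the steady-state error, which is the piece a plain proportional drive command lacks.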
Without getting into any proprietary information, I can say that
the constant positioning of the linear actuator will produce a
latency of 2 to 3 seconds between the time the actuator moves to
a new position and when the PV changes. While experimenting with
the analog input, I noticed an immediate response in motor
velocity, but after the motor is stopped, position is not held in
place. However, while experimenting with the digital pulse train
input, I noticed that the servo drive can only accept one command
at a time; if, halfway through a move, position error produces a
response to move the linear actuator in the opposite or a
different direction, the original move must finish first.
Can anyone recommend the proper configuration for the closed-loop
control I have described?
If I can make the system work with the servo drive/motor, I plan
to use the simple (PCI-6014) DAQ card with the analog out, or
utilize the digital out.
If I can't get this to work, we do have a PXI with a 7344 motion
card (I would like to exhaust all efforts to use the PCI-6014
card first).
Depending on where I go from here, I planned to use the PID VIs
for the loop control.
Thanks,
Wayne Hilburn
Thanks for the reply,
Jochen. I realize there is a built-in latency with Windows, but I
think the I/O control would be OK. A change in actuator position
will not result in an immediate change in the process variable;
is there a way to measure the latency, or is it calculated? A
satisfactory reaction time would be from 1 to 1.5 sec.
Use of the PCI-6014 is to supply the control output to the servo
drive/amp, and not to drive the motor itself. As stated earlier,
while using the 6014 board, I have the choice of digital or analog
output.
Currently I am at a point where I must choose which configuration,
analog control or digital control(in the form of digital pulse train),
(I am inserting from my first message:)
While experimenting with the analog input, I noticed an immediate
response in motor velocity, but after the motor is stopped,
position is not held in place. However, while experimenting with
the digital pulse train input, I noticed that the servo drive can
only accept one command at a time; if, halfway through a move,
position error produces a response to move the linear actuator in
the opposite or a different direction, the original move must
finish first.
I don't claim to understand all the limitations of the
specific boards; however, I am using an approach that is showing me the
characteristics (a couple are listed in the paragraph above) of
the hardware and software configurations.
So I am really back to my original question: which configuration
would be better for closed-loop control, analog or digital pulse train?
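For reference, the PID loop the poster plans to build with the PID VIs can be sketched in ordinary code. A minimal discrete PID in Python; the gains, timestep, and the toy velocity-mode plant are all hypothetical, not tuned for the real actuator:

```python
class PID:
    """Textbook discrete PID; gains and dt are placeholder values."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant standing in for the analog configuration: the command is a
# velocity, and position is its integral -- which is why position is not
# held once the velocity command goes to zero without a closed loop.
dt = 0.01
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
pos = 0.0
for _ in range(2000):
    velocity_cmd = pid.update(10.0, pos)  # drive toward target position 10.0
    pos += velocity_cmd * dt              # plant integrates velocity
```

This also illustrates why the analog (velocity) mode tends to suit continuous closed-loop correction: the loop can revise the command every cycle, whereas a pulse-train move must run to completion before a new command is accepted.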
Thanks,
Wayne Hilburn -
Multiple Oracle Configuration Question
We have a typical environment setup. I will explain it below:
Our application works in online and offline mode. In online mode we connect to an Oracle 10g Enterprise Edition server and a local instance of Access; in offline mode the application works entirely in Access.
Now we want to move away from Access and have Oracle PE instead, because we want to use stored procedures and the same set of code for offline and online processing.
So a typical user machine will have a PE instance and an Oracle client. Currently we use ldap.ora for configuring connections. Now I have a few questions:
1. How do we ensure that Oracle PE will work when we don't have a network connection? Can we set up PE with tnsnames.ora?
2. What is the smallest possible package for PE?
3. Can I use one client to access both the PE and server databases?
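On question 1: a local tnsnames.ora resolves names without any network directory. A hypothetical entry (alias, port, and service name assumed):

```
# $ORACLE_HOME/network/admin/tnsnames.ora  (names here are examples only)
LOCALPE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = localpe))
  )
```

In sqlnet.ora, `NAMES.DIRECTORY_PATH = (TNSNAMES, LDAP)` makes the client try the local file before the LDAP directory, so one client can resolve both the local PE instance and the server databases (question 3), even when the network is down.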
Any help will be highly appreciated.
Thanks in advance.

Assuming the "Xcopy installation" refers to using the Windows xcopy command, can you clarify what, exactly, you are installing via xcopy? Are you just using xcopy to copy the ODP.NET bits, or are you trying to install the Oracle client via that approach?
If you are concerned about support, you would generally want to install everything via the Oracle Universal Installer (barring those very occasional components that don't use the OUI). Oracle generally only supports software installed via the installer because particularly on Windows, there are a number of registry entries that need to get created.
You can certainly do a custom install of the personal edition on the end user machines. There are a few required components that I believe have to be installed (that the installer will take care of). I assume your customization will take the form of a response file to the OUI in order to do a silent install?
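For the response-file approach mentioned here, the usual OUI pattern is to record a response file once and replay it silently. A sketch with hypothetical paths (the -record/-silent flags are standard OUI options, but verify against your release's installer documentation):

```
rem Record a response file during one interactive install:
setup.exe -record -destinationFile C:\stage\pe_custom.rsp

rem Replay it silently on each end-user machine:
setup.exe -silent -responseFile C:\stage\pe_custom.rsp
```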
Justin -
No wakealarm due to kernel configured with old "Enhanced RTC" options
Hi,
I am an Arch newbie building a MythTV box for the first time, and it's going really well! Arch really lets me do what I want to do and doesn't hide stuff, thus letting me solve any problems quite easily. However, I have just encountered a bit of a snag, so I need some advice.
I want to have the machine start up in time for scheduled recordings, and so I need to use the RTC wakeup function of the motherboard. The Mythtv wiki suggests using the new /sys/class/rtc/rtc0/wakealarm interface, but Arch (kernel 2.6.23.14, x86_64) seems to be configured in a way that makes it impossible, here's the relevant text from the Mythtv wiki (http://www.mythtv.org/wiki/index.php/ACPI_Wakeup):
Warning: The wakealarm interface is incompatible with the kernel's old "Enhanced Real Time Clock Support" and "Generic /dev/rtc emulation" options. If your kernel was built with these enabled your kernel log will contain messages such as
rtc_cmos: probe of 00:03 failed with error -16
The solution is to rebuild your kernel with the above two options excluded (find them under Drivers -> Character Devices) and the various RTC interfaces (found under Drivers -> Real Time Clock) included. From a .config point of view CONFIG_RTC and CONFIG_GEN_RTC must be unset and, at a minimum, RTC_INTF_SYSFS must be set.
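For reference, the wiki's advice corresponds to a .config fragment roughly like the following (option names are from 2.6.23-era kernels; verify them against your own tree before building):

```
# Old character-device RTC drivers: must be off
# CONFIG_RTC is not set
# CONFIG_GEN_RTC is not set

# New RTC class framework: provides /sys/class/rtc/rtc0/wakealarm
CONFIG_RTC_CLASS=y
CONFIG_RTC_INTF_SYSFS=y
CONFIG_RTC_DRV_CMOS=y
```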
I get that exact error in my dmesg, and as a result /sys/class/rtc/rtc0/wakealarm does not exist. And since /proc is apparently deprecated I can't use the old option either.
As far as I can see I have three options:
* Wait for official updated kernel from Arch team and hope it is fixed
* Compile my own kernel
* Use nvram-wakeup or something like that
Any suggestions on what to do? Other options? I would like to avoid recompiling the kernel if possible, but I guess that would be the quickest solution, right? I plan on reading up on ABS anyway.
Should I file this as a bug?
Thanks/Johan
Edit: added link to Mythtv wiki
Last edited by infoball (2008-01-28 14:29:39)

Configure the proper proxy server settings using netsh:
http://technet.microsoft.com/en-us/library/cc731131(v=WS.10).aspx#BKMK_5
netsh winhttp set proxy proxy-server="http=myproxy;https=sproxy:88" bypass-list="*.contoso.com"
Ed Crowley MVP "There are seldom good technological solutions to behavioral problems."
I had seen elsewhere that messing with these settings may break the connection to the EMC and EMS.
If that happens, we will be worse off than having an expiring certificate.
How do we ensure that this does not happen?
Can we just download the CRL file locally? -
Master iPad configurator question concerning cart syncing with different versions of iPads.
I have a question concerning Configurator syncing. Can the master iPad be a different version of iPad than the other synced iPads? For instance, can an iPad 2 be the master iPad for a group of iPad Airs? The iPad 2 has fewer capabilities than the Air; would some settings or restrictions be left off the iPad Airs if they were set up this way? Thanks.
There is no such thing as 'master iPad'. If you're using Configurator or Profile Manager control of the setup is done from a Macintosh.
-
Setup/ Configuration Question
The network setup I'm using is a wireless G router, Belkin brand, and I have four WRE54G expanders throughout the warehouse. I don't have WEP turned on, so I used the auto configuration on all of them. They all made connection and they all work. However, each one of them tends to lose connection from time to time. At least once every couple of weeks I have to reset one of them. When they stop working, the red light comes on and I know the link is gone. I'll reset it and everything will be fine for a while.
I thought I'd try to get into the web utility to check if any settings are off; however, because it has been auto configured, I'm not sure what the IP address is. It's not the default. We use a 10.x.x.x range, and I've scanned the entire range, and none of them show up on an IP. The only thing that does, besides the computers connecting, is the router. I've run the Linksys setup utility and had it do a site survey, but it keeps coming back saying that the site survey failed. I hate to take them all down from where they are mounted and physically connect them to check things, but I'm not sure if that's the problem. Any ideas would be appreciated. Thanks. If the question I am asking here has already been addressed, please point me to the related thread.
(Edited subject to keep page from stretching. Thanks!)
Message Edited by JOHNDOE_06 on 06-21-2007 08:57 AM

Still kinda funny that the site survey fails. Are you using a Vista computer? Saw that using an XP computer solved someone's problem.
If you can't get the setup software to work, you really need to find out what IPs were assigned to the REs to access the web interface, or hard wire them (v2 and 3). I know it is a lot of work. I guess you have to assess how much of a hassle it is.
I don't know if the RE shows up in ipconfig /all. I guess it should, since it has a unique IP address. Mine's the default, so I'll check and repost.
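One way to hunt for the expanders without unmounting them is to ping-sweep the subnet and then look in the ARP table for their MAC addresses (the OUI prefix is printed on the unit's label). A hypothetical sketch that parses arp-style output; the sample text, IPs, and MAC prefixes below are made up for illustration:

```python
import re

# Sample 'arp -a' style output (hypothetical addresses).
ARP_OUTPUT = """\
? (10.0.0.1) at 00-0f-66-12-34-56 [ether] on eth0
? (10.0.0.57) at 00-14-bf-ab-cd-ef [ether] on eth0
? (10.0.0.99) at 00-1a-2b-3c-4d-5e [ether] on eth0
"""

# MAC prefixes to look for -- substitute the OUI printed on your units.
LINKSYS_PREFIXES = ("00-0f-66", "00-14-bf")

def find_devices(arp_text, prefixes):
    """Return (ip, mac) pairs whose MAC starts with one of the prefixes."""
    pairs = re.findall(r"\((\d+\.\d+\.\d+\.\d+)\) at ([0-9a-f-]{17})", arp_text)
    return [(ip, mac) for ip, mac in pairs if mac.startswith(prefixes)]

devices = find_devices(ARP_OUTPUT, LINKSYS_PREFIXES)
```

In practice you would ping each address in the 10.x.x.x range first so the ARP cache is populated, then feed the real `arp -a` output to a filter like this.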
I'm wondering what the effect is of having 4 REs in relatively "close" proximity? When mine loses connection (infrequently), the light turns red but immediately turns blue because it connects to the router again (I've got only 1 RE). Would yours connect to the router or to another RE? If it connects to another RE, I guess you lose half the speed again. Interesting....
Also, other than the blue lights, do you have any other indication that the REs are working, e.g. increased signal?
Message Edited by Luckydog on 06-21-2007 11:53 AM
Oracle hardware and storage solution configuration questions
Hi all,
I am configuring hardware and the storage solution for a project and am hoping to have some questions answered about using Oracle as the storage solution.
The current setup will have 2 Dell NX3100 NAS gateways, each with dual quad-core processors, 24 GB of RAM, 12 x 2 TB data disks, and running Windows Storage Server 2008 64-bit as the OS.
We will also have direct attached storage of 2 Dell PowerVault MD1200 disk arrays, each with 12 x 2 TB SAS disk drives, giving a total of 36 TB of storage space for each NAS gateway.
Based on this information, is there any problem with two Oracle Standard Edition installations (one per NAS) holding up to 36 TB of data (mostly high-res images) in this hardware configuration?
Does Oracle have a built-in solution for replicating data between the 2 NAS heads and down to the disk arrays, where the application server writes to one of the NAS + disk arrays and that data is then written from the first NAS to the 2nd NAS + disk array? Currently I've used DoubleTake in other projects, but am wondering if Oracle has something similar built in.
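For reference, Oracle's Data Guard replication requires Enterprise Edition, so with Standard Edition the built-in options are roughly materialized-view replication over a database link, or manually shipped backups. A hedged SQL sketch, with every name (link, user, table, refresh interval) hypothetical:

```sql
-- Run on the secondary database. Sketch only; not a tested configuration.
CREATE DATABASE LINK primary_nas
  CONNECT TO app_user IDENTIFIED BY app_pass
  USING 'PRIMARY_NAS_TNS';

-- Read-only copy of the images table, refreshed hourly.
CREATE MATERIALIZED VIEW images_copy
  REFRESH FAST START WITH SYSDATE NEXT SYSDATE + 1/24
  AS SELECT * FROM images@primary_nas;
```

Note that REFRESH FAST requires a materialized view log on the source table (`CREATE MATERIALIZED VIEW LOG ON images;` on the primary); without one, REFRESH COMPLETE is the fallback, which is expensive for tens of terabytes.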
Finally, will the Backup Exec Oracle agent work with this configuration for backing up the data to a Dell PowerVault ML6020 tape backup device?
Thanks in advance for any insight.

Hi,
"Does Oracle have a built-in solution for replicating data between the 2 NAS heads and down to the disk arrays? Where the application server will write to one of the NAS+disk arrays and then that data is written from the first NAS to the 2nd NAS+disk array? Currently I've used DoubleTake in other projects but am wondering if Oracle has something similar that is built in."
NAS - I would still have doubts because of network issues (in the case of RAC, all nodes would get affected), so I would certainly not suggest this. Let the other experts reply back.
- Pavan Kumar N
Maybe you are looking for
-
I can scroll through the dates. They highlight and expand when I scroll through them; some gray (local) and some magenta (on my backup disk). But nothing at all happens when I click, right click, or double click on ANY earlier date. The back/forwa
-
Running Dos program in VMWare/Windows XP environment
My wife's PC bit the dust, and we're trying to determine if Mac could be a solution. The thought is to load VMware Fusion on a new Mac (and load XP). The dos application needs to be able to connect to handheld barcode scanner device (on Com 1 or Com
-
ITunes 8.1 freezes when iPod touch is plugged in, unfreezes when pluggedout
I've just updated my iPod touch 1st gen to 2.2.1, it was working and syncing perfectly for two days. This morning, when the iPod touch was plugged into iTunes, iTunes froze when the iPod was recognized by the computer. I tried plugging the iPod touch
-
Hi - When I use memcmp() function in Solaris 10 i386, it does not seem to work. When I convert the arguments to strings and then use strcmp, they seem to work fine. Is there something I am missing. The same code with memcmp etc works fine on Solaris
-
Transport request : tp check buffer for already imported requests
Hi, I want to put a transport request on my prod. There was no problem on the quality. But on the prod, i have the message : "tp check buffer for already imported requests" and nothing appends. Do you have any idea how i can unlock this transport req