A little question about compiling my own kernel
Hi, how can I tell, please, which modules are loaded in my current kernel and actually being used?
For example, if I have the RAID xxxx module now and I don't use it, then I don't need it when compiling my own kernel.
How can I know which ones aren't needed?
OK, lsmod I know about; is there anything else?
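Since the thread stops at lsmod, here is a minimal sketch of filtering its output for modules whose use count is zero, i.e. candidates to leave out of a custom kernel. The sample data below is made up for illustration; on a real system you would pipe `lsmod` itself, and a zero use count only means nothing depends on the module right now, not that it is never needed.

```shell
# Print loaded modules with a use count of 0 (third lsmod column).
# The sample lsmod-style output is hypothetical; replace the echo with `lsmod`.
echo "Module                  Size  Used by
raid456               102400  0
ext4                  737280  1
usbhid                 65536  0" |
awk 'NR > 1 && $3 == 0 { print $1 }'
```

For the sample data this prints raid456 and usbhid; verify such candidates against your actual hardware before dropping anything from the kernel config.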
Similar Messages
-
Basic Questions About Compiling Source
Hi!
I have some very basic questions about compiling source on 10.6. BTW, if the unix discussions still exist, they've hidden them pretty well, so I hope I'm in the right place for this!
First off, you simply cd to the source dir, wherever it may be - in my case ~/Downloads/source/ - and during the install process, everything will be installed in its proper dir, right?
How do you know which compiler to use? There seem to be several: make, gmake, gcc, g++, etc...
Once you do figure out which compiler to run, the process is supposed to go like this, right?
./configure
make (or whatever)
make install
But this doesn't always work for me. For instance, I'm trying to compile 'arm', but it doesn't seem to have a 'configure' script.
$ ls ~/Downloads/arm
ChangeLog
README
armrc.sample
setup.py
LICENSE
arm
install
/src
Maybe it's that 'setup.py' file? What are you supposed to do?
Of course, it's not only this one that's given me trouble. Sometimes the readme will say I have to edit a certain file for my system. Are there just a few standard changes you always make? Or is it...how can I put it...complicated? How do you find out what's needed in those cases?
OS 10.6.8
Xcode 3.2.4
Python 2.7

sudont wrote:
I have some very basic questions about compiling source on 10.6. BTW, if the unix discussions still exist, they've hidden them pretty well, so I hope I'm in the right place for this!
This is the place for UNIX discussions. If you have developer-related questions, there is a forum dedicated to that as well: Developer Forums
First off, you simply cd to the source dir, wherever it may be - in my case ~/Downloads/source/ - and during the install process, everything will be installed in its proper dir, right?
Yes. Hopefully the project you want to install follows standard conventions. If so, you can do "./configure", then "make", and finally "sudo make install" to install the software into "/usr/local".
How do you know which compiler to use? There seem to be several: make, gmake, gcc, g++, etc...
The makefile will figure that out. make and gmake aren't compilers at all; they read the makefile, which invokes the right compiler (gcc for C, g++ for C++) for you.
Once you do figure out which compiler to run, the process is supposed to go like this, right?
./configure
make (or whatever)
make install
Yes, with the addition of "sudo" before "make install" because "/usr/local" is owned by root.
But this doesn't always work for me. For instance, I'm trying to compile 'arm', but it doesn't seem to have a 'configure' script.
$ ls ~/Downloads/arm
ChangeLog
README
armrc.sample
setup.py
LICENSE
arm
install
/src
arm? You mean "arm (anonymizing relay monitor) - Terminal status monitor for Tor relays." You really don't want to be messing with that stuff. The only people involved with Tor that are trustworthy are US Navy intelligence who have their own uses for it. If you don't understand it as well as they do, best stay away. -
Hi!
I want to edit and compile my own kernel, so I get more control over my machine, and I hope to get a little more performance.
How do I edit and compile a custom kernel?
[EDIT] And maybe, how do I make all my self-compiled programs target a Pentium 4 processor, as I could when I was a Gentoo user?

Kris
There is a wiki page for custom kernel generation.
Basically, you can use ABS to obtain the base kernel with its accompanying patches and config file. ABS is the Arch Build System, which somewhat automates the making of packages, including custom kernels.
Just run abs on the command line and find the download in /var/abs/kernels.
The kernel you want to customize is found therein.
You customize it by modifying the kernel config and the PKGBUILD file.
Run "makepkg" on the PKGBUILD and then run pacman -A (name of custom kernel.pkg.tar.gz). If you use LILO, run /sbin/lilo before rebooting.
The steps are outlined in the wikis.
Best of luck!!! -
Hello, I have a question about compiling. I always presumed that when a class is compiled and then run, the import statements just tell the executing JVM where to look in the JRE for the imported classes before loading them into memory (is that correct?)
What's confusing me is what happens when I write an applet which utilises the netscape.javascript.JSObject class (which after JRE 1.4.1 resides in $JAVA/lib/plugins.jar). I have no idea what happens, as in: does the relevant code get compiled into my .bin file, or is that jar just referenced on the executing client's machine? And if the latter, what happens if they are using a version before 1.4.2, where this class resides in lib/jaws.jar? Or does that not make any difference, as the package names are still the same?
You can see I'm confused and I think this highlights a fundamental gap in my knowledge,
Thanks for any help!

r035198x wrote:
The classes are loaded on application start up.

Not always, no.

<quote-self>Just use the reflection API.. </quote-self>

If the JVM doesn't find the required class then your application probably won't have had a chance to try and catch the exception.

That's exactly why we use reflection to try to load classes which may or may not exist.
PS: Had to do this with Xerces classes to ensure we got the right version despite certain versions of Ant prepending its version to my classpath, darn thing... sort of an "exhaustive search" for class definitions. Booger of an idea. -
Questions about compiling a kernel on Arch Linux
Hi... I'm new to this forum and new to Arch Linux.
I'm trying to compile a custom kernel so as not to replace the kernel26 package. I'd prefer to build it with ABS so I can manage it with pacman. I followed the wiki but something went wrong... I used both PKGBUILDs I found on the wiki, but nothing. Can someone help me please??

If it can be of any help, this works for me too. Actually, it's the official Arch kernel PKGBUILD (maybe not the latest one) that I've just changed according to my needs:
# $Id: PKGBUILD 17203 2008-10-26 20:28:29Z tpowa $
# Maintainer: Tobias Powalowski <[email protected]>
# Maintainer: Thomas Baechler <[email protected]>
pkgname=kernel26mm
_basekernel=2.6.27
pkgver=2.6.28
pkgrel=5
_patchname="patch-${pkgver}-${pkgrel}-ARCH"
pkgdesc="The Linux Kernel and modules"
arch=(i686 x86_64)
license=('GPL2')
groups=('base')
url="http://www.kernel.org"
backup=(etc/mkinitcpio.d/${pkgname}.preset)
depends=('coreutils' 'module-init-tools' 'mkinitcpio>=0.5.18')
# pwc, ieee80211 and hostap-driver26 modules are included in kernel26 now
# nforce package support was abandoned by nvidia, kernel modules should cover everything now.
# kernel24 support is dropped since glibc24
replaces=('kernel24' 'kernel24-scsi' 'kernel26-scsi'
'alsa-driver' 'ieee80211' 'hostap-driver26'
'pwc' 'nforce' 'squashfs' 'unionfs' 'ivtv'
'zd1211' 'kvm-modules' 'iwlwifi' 'rt2x00-cvs'
'gspcav1')
install=kernel26mm.install
source=(ftp://ftp.kernel.org/pub/linux/kernel/v2.6/linux-$_basekernel.tar.bz2
ftp://ftp.kernel.org/pub/linux/kernel/v2.6/testing/patch-2.6.28-rc2.bz2
http://kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.28-rc2/2.6.28-rc2-mm1/2.6.28-rc2-mm1.bz2
# the main kernel config files
config config.x86_64
iosched-bfq-03-update-kconfig-kbuild.patch
iosched-bfq-02-add-bfq-scheduler.patch
iosched-bfq-01-prepare-iocontext-handling.patch
zen.git-aircrack.patch
march-native.patch
# standard config files for mkinitcpio ramdisk
kernel26mm.preset)
build() {
KARCH=x86
cd $startdir/src/linux-$_basekernel
# Add -ARCH patches
# See http://projects.archlinux.org/git/?p=linux-2.6-ARCH.git;a=summary
patch -Np1 -i $startdir/src/patch-2.6.28-rc2 || return 1
patch -Np1 -i $startdir/src/2.6.28-rc2-mm1 || return 1
patch -Np1 -i $startdir/src/iosched-bfq-01-prepare-iocontext-handling.patch || return 1
patch -Np1 -i $startdir/src/iosched-bfq-02-add-bfq-scheduler.patch || return 1
patch -Np1 -i $startdir/src/iosched-bfq-03-update-kconfig-kbuild.patch || return 1
patch -Np1 -i $startdir/src/zen.git-aircrack.patch || return 1
patch -Np1 -i $startdir/src/march-native.patch || return 1
sed -i 's|^EXTRAVERSION = .*$|EXTRAVERSION =|g' Makefile
if [ "$CARCH" = "x86_64" ]; then
cat ../config.x86_64 >./.config
else
cat ../config >./.config
fi
# build the full kernel version to use in pathnames
. ./.config
### next line is only needed for rc kernels
#_kernver="2.6.25${CONFIG_LOCALVERSION}"
_kernver="2.6.28${CONFIG_LOCALVERSION}"
# load configuration
make menuconfig
# build!
# stop here
# this is useful to configure the kernel
#msg "Stopping build"
#return 1
make bzImage modules || return 1
mkdir -p $startdir/pkg/{lib/modules,boot}
make INSTALL_MOD_PATH=$startdir/pkg modules_install || return 1
cp System.map $startdir/pkg/boot/System.map26mm
cp arch/$KARCH/boot/bzImage $startdir/pkg/boot/vmlinuz26mm
install -D -m644 Makefile \
$startdir/pkg/usr/src/linux-${_kernver}/Makefile
install -D -m644 kernel/Makefile \
$startdir/pkg/usr/src/linux-${_kernver}/kernel/Makefile
install -D -m644 .config \
$startdir/pkg/usr/src/linux-${_kernver}/.config
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/include
for i in acpi asm-{generic,x86} config linux math-emu media net pcmcia scsi sound video; do
cp -a include/$i $startdir/pkg/usr/src/linux-${_kernver}/include/
done
# copy files necessary for later builds, like nvidia and vmware
cp Module.symvers $startdir/pkg/usr/src/linux-${_kernver}
cp -a scripts $startdir/pkg/usr/src/linux-${_kernver}
# fix permissions on scripts dir
chmod og-w -R $startdir/pkg/usr/src/linux-${_kernver}/scripts
#mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/.tmp_versions
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/arch/$KARCH/kernel
cp arch/$KARCH/Makefile $startdir/pkg/usr/src/linux-${_kernver}/arch/$KARCH/
if [ "$CARCH" = "i686" ]; then
cp arch/$KARCH/Makefile_32.cpu $startdir/pkg/usr/src/linux-${_kernver}/arch/$KARCH/
fi
cp arch/$KARCH/kernel/asm-offsets.s $startdir/pkg/usr/src/linux-${_kernver}/arch/$KARCH/kernel/
# add headers for lirc package
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/drivers/media/video
cp drivers/media/video/*.h $startdir/pkg/usr/src/linux-${_kernver}/drivers/media/video/
for i in bt8xx cpia2 cx25840 cx88 em28xx et61x251 pwc saa7134 sn9c102 usbvideo zc0301; do
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/drivers/media/video/$i
cp -a drivers/media/video/$i/*.h $startdir/pkg/usr/src/linux-${_kernver}/drivers/media/video/$i
done
# add dm headers
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/drivers/md
cp drivers/md/*.h $startdir/pkg/usr/src/linux-${_kernver}/drivers/md
# add inotify.h
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/include/linux
cp include/linux/inotify.h $startdir/pkg/usr/src/linux-${_kernver}/include/linux/
# add CLUSTERIP file for iptables
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/net/ipv4/netfilter/
# add wireless headers
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/net/mac80211/
cp net/mac80211/*.h $startdir/pkg/usr/src/linux-${_kernver}/net/mac80211/
# add dvb headers for external modules
# in reference to:
# http://bugs.archlinux.org/task/9912
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/drivers/media/dvb/dvb-core
cp drivers/media/dvb/dvb-core/*.h $startdir/pkg/usr/src/linux-${_kernver}/drivers/media/dvb/dvb-core/
# add dvb headers for external modules
# in reference to:
# http://bugs.archlinux.org/task/11194
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/include/config/dvb/
cp include/config/dvb/*.h $startdir/pkg/usr/src/linux-${_kernver}/include/config/dvb/
# add xfs and shmem for aufs building
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/fs/xfs
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/mm
cp fs/xfs/xfs_sb.h $startdir/pkg/usr/src/linux-${_kernver}/fs/xfs/xfs_sb.h
# add vmlinux
cp vmlinux $startdir/pkg/usr/src/linux-${_kernver}
# copy in Kconfig files
for i in `find . -name "Kconfig*"`; do
mkdir -p $startdir/pkg/usr/src/linux-${_kernver}/`echo $i | sed 's|/Kconfig.*||'`
cp $i $startdir/pkg/usr/src/linux-${_kernver}/$i
done
cd $startdir/pkg/usr/src/linux-${_kernver}/include && ln -s asm-$KARCH asm
chown -R root.root $startdir/pkg/usr/src/linux-${_kernver}
find $startdir/pkg/usr/src/linux-${_kernver} -type d -exec chmod 755 {} \;
cd $startdir/pkg/lib/modules/${_kernver} && \
(rm -f source build; ln -sf ../../../usr/src/linux-${_kernver} build)
# install fallback mkinitcpio.conf file and preset file for kernel
install -m644 -D $startdir/src/${pkgname}.preset $startdir/pkg/etc/mkinitcpio.d/${pkgname}.preset || return 1
# set correct depmod command for install
sed -i -e "s/KERNEL_VERSION=.*/KERNEL_VERSION=${_kernver}/g" $startdir/kernel26mm.install
echo -e "# DO NOT EDIT THIS FILE\nALL_kver='${_kernver}'" > ${startdir}/pkg/etc/mkinitcpio.d/${pkgname}.kver
# remove unneeded architectures
rm -rf $startdir/pkg/usr/src/linux-${_kernver}/arch/{alpha,arm,arm26,avr32,blackfin,cris,frv,h8300,ia64,m32r,m68k,m68knommu,mips,mn10300,parisc,powerpc,ppc,s390,sh,sh64,sparc,sparc64,um,v850,xtensa}
Just replace every kernel26mm with kernel26mycustomkernelname; do the same below in the install file and, of course, in your kernel config.
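The renaming step just described can be scripted with sed. A rough sketch, assuming GNU sed for the -i flag; the sample file contents and the target name "kernel26custom" are made up for illustration:

```shell
# Rewrite every kernel26mm occurrence to a custom name in a build file.
# (The sample file and the name "kernel26custom" are illustrative only.)
sample=$(mktemp)
printf 'pkgname=kernel26mm\ninstall=kernel26mm.install\n' > "$sample"
sed -i 's/kernel26mm/kernel26custom/g' "$sample"
cat "$sample"   # pkgname=kernel26custom / install=kernel26custom.install
rm -f "$sample"
```

In practice you would run the same substitution over the PKGBUILD and the .install file, and rename the files themselves to match.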
# arg 1: the new package version
# arg 2: the old package version
KERNEL_VERSION=2.6.28-mm
post_install () {
# updating module dependencies
echo ">>> Updating module dependencies. Please wait ..."
depmod $KERNEL_VERSION > /dev/null 2>&1
# generate init ramdisks
echo ">>> MKINITCPIO SETUP"
echo ">>> ----------------"
echo ">>> If you use LVM2, Encrypted root or software RAID,"
echo ">>> Ensure you enable support in /etc/mkinitcpio.conf ."
echo ">>> More information about mkinitcpio setup can be found here:"
echo ">>> http://wiki.archlinux.org/index.php/Mkinitcpio"
echo ""
echo ">>> Generating initial ramdisk, using mkinitcpio. Please wait..."
/sbin/mkinitcpio -p kernel26mm
}
post_upgrade() {
pacman -Q grub &>/dev/null
hasgrub=$?
pacman -Q lilo &>/dev/null
haslilo=$?
# reminder notices
if [ $haslilo -eq 0 ]; then
echo ">>>"
if [ $hasgrub -eq 0 ]; then
echo ">>> If you use the LILO bootloader, you should run 'lilo' before rebooting."
else
echo ">>> You appear to be using the LILO bootloader. You should run"
echo ">>> 'lilo' before rebooting."
fi
echo ">>>"
fi
if grep "/boot" /etc/fstab >/dev/null 2>&1; then
if ! grep "/boot" /etc/mtab >/dev/null 2>&1; then
echo "WARNING: /boot appears to be a separate partition but is not mounted"
echo " This is most likely not what you want. Please mount your /boot"
echo " partition and reinstall the kernel unless you are sure this is OK"
fi
fi
# updating module dependencies
echo ">>> Updating module dependencies. Please wait ..."
depmod -v $KERNEL_VERSION > /dev/null 2>&1
echo ">>> MKINITCPIO SETUP"
echo ">>> ----------------"
echo ">>> If you use LVM2, Encrypted root or software RAID,"
echo ">>> Ensure you enable support in /etc/mkinitcpio.conf ."
echo ">>> More information about mkinitcpio setup can be found here:"
echo ">>> http://wiki.archlinux.org/index.php/Mkinitcpio"
echo ""
echo ">>> Generating initial ramdisk, using mkinitcpio. Please wait..."
if [ "`vercmp $2 2.6.19`" -lt 0 ]; then
/sbin/mkinitcpio -p kernel26mm -m "ATTENTION:\nIf you get a kernel panic below
and are using an Intel chipset, append 'earlymodules=piix' to the
kernel commandline"
else
/sbin/mkinitcpio -p kernel26mm
fi
if [ "`vercmp $2 2.6.21`" -lt 0 ]; then
echo ""
echo "Important ACPI Information:"
echo ">>> Since 2.6.20.7 all possible ACPI parts are modularized."
echo ">>> The modules are located at:"
echo ">>> /lib/modules/$(uname -r)/kernel/drivers/acpi"
echo ">>> For more information about ACPI modules check this wiki page:"
echo ">>> 'http://wiki.archlinux.org/index.php/ACPI_modules'"
fi
}
op=$1
shift
$op $*
Last edited by bangkok_manouel (2009-01-27 13:55:34) -
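The `vercmp $2 2.6.19` checks in the install file above compare package versions. For readers without Arch's vercmp at hand, a rough stand-in can be built from GNU `sort -V` (an assumption: a coreutils sort with version-sort support is available; the real vercmp handles more corner cases such as epochs and pkgrel suffixes):

```shell
# vercmp-style "is version A strictly older than version B?" using sort -V.
# (A sketch, not Arch's actual vercmp implementation.)
version_lt() {
    [ "$1" = "$2" ] && return 1
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}

version_lt 2.6.18 2.6.19 && echo "pre-2.6.19: show the earlymodules=piix hint"
version_lt 2.6.21 2.6.19 || echo "2.6.19 or newer: plain mkinitcpio run"
```

Here the first call succeeds and the second fails, so both messages print, mirroring the two branches the install script takes for old and new kernels.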
How would I go about compiling my own multilib GCC?
Hello, as the topic says, how would I compile my own multilib GCC? I realize there is one on the AUR, but it is version 4.4.0, which has issues with compiling FFMpeg and MPlayer(both of which I compile daily), and the fact that the pkgbuild has to be manually updated in a few areas to work in the first place. I'm thinking about updating the PKGBUILD myself, actually(well creating a parallel build in any case, because the author hasn't actually orphaned it, but he seems to have disappeared).
I've found a few documents on the subject but they are distro specific, or use scripts. I don't want to be a script kiddie, I want to know what the "base" configure parameters are for setting one up. I can figure out the rest.
Slackware scripts link here.
-What ./configure parameters create a multi-lib system? What specifies the multilib compile to be 32-bit and 64-bit?(I read that sometimes people use multi-lib for other features so I want to make sure it's 32-bit and 64-bit, not something else)
-How would I make this build compatible with the /opt/lib32 system that Arch users like to use?
-Do I only need to compile binutils, glibc, and gcc? And lastly, does this replace the "regular" gcc on the system? (aka, if I created a PKGBUILD for this, would gcc still be named gcc, or would it be renamed gcc-multilib and have its own directories for libs?)
Last edited by MP2E (2009-10-01 01:03:40)

I'd start by going here:
http://aur.archlinux.org/packages.php?ID=28545 -
Two little questions about ROWID
Hello friends at www.oracle.com ,
as we know, each time we insert a new row, Oracle creates a unique ROWID for that row. I have 2 questions about ROWID:
1. Is there a way for me to obtain the ROWID value at the moment of insertion, other than doing a SELECT rowid FROM (table) WHERE (primary key inserted values) to obtain the ROWID that was generated for that row?
2. If, for any reason, we have such a big database that all possible ROWID combinations are exhausted, how would Oracle handle that situation? I believe it's something quite rare to happen but, anyway, there's the possibility, as Murphy's Law has taught us :)
Thanks for all answers, and best regards,
Franklin Gonçalves Jr.

1. The (quite new) DML RETURNING clause allows this:
E.g. insert into bonus values ('Bill', 'work', 100, 4) returning rowid into :x
This works in 8.1.7 and 9.0.1; I don't know about earlier releases.
2. A ROWID includes the physical address of the data. Before you used up all the ROWIDs, the database wouldn't be able to hold any more data. The limit is big, but not infinite. If you hit it, try going to distributed databases using database links. -
A little question about inheritance
Can someone explain this to me?
I have been reading about inheritance in Java. As I understand it when you extend a class, every method gets "copied" to the subclass. If this is so, how come this doesn't work?
class inherit {
    int number;
    public inherit() {
        number = 0;
    }
    public inherit(int n) {
        number = n;
    }
}
class inherit2 extends inherit {
    public inherit2(int n, int p) {
        number = n * p;
    }
}
class example {
    public static void main(String args[]) {
        inherit2 obj = new inherit2(); // won't compile: inherit2 has no no-arg constructor
    }
}

What I try to do here is to extend the class inherit with inherit2. Now the obj object is of the inherit2 class and, as such, it should inherit the constructor without parameters from the inherit class, or shouldn't it? If not, then should I rewrite all the constructors which are the same and then add the new ones?

I believe you were asking why SubClass doesn't have the "default" constructor... after all, shouldn't SubClass just have all the contents of SuperClass copy-pasted into it? Not exactly. ;)
(code below... if you'd like, you can skip the first bit, start at the code, and work your way down... depending on if you just started, the next bit may confuse rather than help)
Constructors are special... interfaces don't specify them, and subclasses don't inherit them. There are many cases where you may not want your subclass to expose a constructor from its superclass. I know this sounds like I'm saying "there are many cases where you won't want a subclass to act exactly like a superclass, and then some (extend their functionality)", but it's not, because constructors aren't how an object acts, they're how an object gets created.
As mlk said, the compiler will automatically create a default constructor, but not if there is already a constructor defined. So, unfortunately for you, there won't be a default constructor made for SubClass that you could use to create it.
class SuperClass { // formerly inherit
    int number;
    public SuperClass() { // default constructor
        number = 0;
    }
    public SuperClass(int n) {
        number = n;
    }
}
class SubClass extends SuperClass { // formerly inherit2
    // DEFAULT CONSTRUCTOR public SubClass() WILL NOT BE ADDED BY THE COMPILER
    public SubClass(int n, int p) {
        number = n * p;
    }
}
class Example {
    public static void main(String[] args) {
        // attempted use of the default constructor
        // on a default-constructorless subclass!
        SubClass testSubClass = new SubClass(); // won't compile
    }
}
If you're still at a loss, just remember: "Constructors aren't copy-pasted down from the superclass into the subclass!" and "Default constructors aren't added in if you add your own constructor in" :)
To get it to work, you'd have to add the constructor you used in main to SubClass (like doopsterus did with inheritedClass), or use the constructor you defined in SubClass for when you make a new one in main:
inherit2 obj = new inherit2(3,4);
Hope that cleared things up further, if needed. By the way, you should consider naming your classes as a NounStartingWithACapital, and only methods as a verbStartingWithALowercase -
Question about ASE 15.7 kernel mode – process vs threaded
Hi there,
This question is related to upgrading a Sybase ASE server from version 15.5 to version 15.7 on the Sun Solaris SPARC operating system (Solaris 10). For the upgrade, I need to choose between threaded kernel mode and process kernel mode.

1> Is the process kernel mode in ASE 15.7 identical to the process kernel in ASE 15.5? Are there any changes to process kernel mode between versions 15.5 and 15.7? With all the other configuration on the Solaris host being the same (number of CPUs, RAM, storage, Sybase ASE configuration parameter values like total memory and engines), will the performance of ASE 15.7 in process kernel mode be the same as on ASE 15.5? If there are differences in performance, what are they? What should I be monitoring on ASE 15.7 to see if there is a performance issue on the ASE server?

2> For the threaded kernel mode, are there any performance overheads such as higher CPU usage? We are not planning to change the hardware configuration of the Solaris host during the upgrade. What values should I be monitoring at the OS level or the Sybase ASE level to make sure that there are no performance issues? Is there any particular parameter that needs to be monitored closely when we change to threaded kernel mode? Are there any Sybase configuration parameters that I need to tune for the new threaded kernel mode? If so, do you have any performance tuning tips (white papers, technical documents) for threaded kernel mode?

3> What is the SAP Sybase recommendation on which kernel mode I should be using? I researched the SAP Knowledge Base articles for recommendations on which kernel mode to use, and I could not find a definitive answer.
Thanks,
Raj

Actually, with Solaris on SPARC there is a very key consideration: you should be on ASE 15.7 SP50 or SP100 or higher AND have applied the kernel jumbo patch (Oracle BugID 16054425) from Solaris that fixes the issue in which OS threads were prohibited from issuing KAIO calls. Then, in ASE, you need to configure the 'solaris async i/o mode' parameter.
The changes between threaded and process mode kernels should be transparent to your application. Generally speaking, SAP recommends 'threaded' kernel mode - and specifically for SAP applications only certifies 'threaded' kernel mode. For custom applications, you *could* use either - although, depending on how many engines you run and what chipset you are on, you might see better performance with one vs. the other - typically threaded kernel mode is better, although there are exceptions.
Generally, the main difference is that in process kernel mode, each engine did disk and network IO polling. Due to OS security restrictions, this often meant that an engine had to process the disk IO that it submitted... or, for networking, your SPID had to run on your network engine for any network IO. In threaded mode, there is a single parent process with engine tasks (threads) and IO tasks (threads). Consequently, for customers used to running tens of engines, we suggest a starting point of reducing the number of engines by 2 to give the IO tasks CPU time.
One reason why threaded mode is often better is OS scheduling. With hardware-threaded cores, most OS dispatch schedulers do a better job of spreading the threads across the cores before utilizing the hardware threads when a process is multi-threaded with POSIX threads than with a multi-process/kernel-threaded implementation. At one customer, we proved that we consistently got the best performance with the threaded kernel, and the only way we could match it with the process kernel was to use OS CPU affinity to bind the ASE engine processes to individual cores.
However, one consideration is that older SPARC Niagara chips (T1-T4) were notoriously bad at network handling for some reason. As a result, one of the recommendations for ASE on SPARC Niagara chips is to run the same number of network engines as the number of cores ASE is using. One solution to that is in Solaris 11 (I think) where Solaris finally started being able to handle network interrupts better - I also think the T5/T6 are much better chips - but I have no customer experience data to go on there. -
Question about Compiler Installation
Hi everyone. I haven't been in this forum for a while.
Anyhow, I was wondering: I'm about to run a compiler program called "JCreator LE", and I came upon this message:
"In order for you to make use of the java documentation, you need to have a recent version of JDK JavaDocs installed on your system."
Select the JavaDocs home directory: e.g. C:\jdk1.4\docs
I tried looking into my jdk1.4.1_04 folder, but I couldn't locate the source.
So what is JavaDocs, and what is the most appropriate site to download it from?
Thanks, Greg

The JavaDocs are basically the Java API documentation. You can download any version at http://java.sun.com/docs/ or you can view the complete API docs online at http://java.sun.com/j2se/1.4.2/docs/api/index.html
-
Question about compiled byte code
Howdy folks,
I have a client who wants to run an application in both on-line and off-line modes. The user could run the application locally on their laptop, making changes and such, which would get stored to local database files (probably FoxPro free tables, just to make it easier on me). Then, when the user got back to their internet connection, they could run the application and it would sync with the online tables (probably MySQL tables at that point).
So the question is: if I compile ColdFusion code into Java byte code, will it be able to execute independently of the ColdFusion Server? I realize that I could load ColdFusion on the user's laptop, but I don't think I want to do that. I'm assuming that the answer to my question will be "No. You can't do that. ColdFusion isn't meant to work like that." To which my next question would be, "Well, what language would be best for the type of application I have described above? ActionScript, maybe?"
Any thoughts are welcome, especially if you've written an application like the one I've described.
Thanks very much,
Chris

Well, rats.
I wrote a nice reply to your message, BKBK, but lost it because, apparently, my session timed out.
The basic gist was that I've been working on AJAX, and have been implementing some AJAX-like techniques at some other clients (using hidden iframes combined with innerHTML -- I know it's not a standard, but darn handy otherwise), but I couldn't see how that would solve my on-line/off-line problem (unless I stuck with the cookies idea).
I also did some reading on cookies last night (obviously, I don't use cookies very often, if at all, in my daily coding), and I'm a bit put off by the different browser limitations. I'd hate for my client to be chugging along, entering appointments into the "database" (read: data being stored as cookies to be synced later when the user goes online), and then suddenly run into the cookie limitation. On top of that, if I'm reading right, IE (my client's most likely choice of browser) will not let you know that you've reached this limit, but will just begin dropping the older cookies in favor of the newer ones. If I could programmatically sense this limitation and then write the cookies to some file before continuing, that'd be great, but since JavaScript can't write files (that I know of) this isn't feasible. Also, if I could write a file like that, I wouldn't bother with the cookies.
I think I'm going to end up writing it in FoxPro, since my company has a bunch of copies of it (and it's licensed per developer and not per copy), and there are lots of folks in my company who can help me get up to speed. That also means that I'll probably need to write a web version of the code for when my client's clients (does that make sense? :-) ) connect to the app via the internet.
Anyway, I'm really enjoying everyone's comments on the subject. Can anybody think of a technique for a way around the cookie limitations? Or perhaps another language that this whole thing could be written in?
I really wish that I could compile my ColdFusion code for use independent of the CF server. I know, that's not the way it works and typically not what scripting languages like this are used for. I suppose I could always install the developer's version of CF on the user's local machine, write the code in CF, and then just detect whether or not the user is online and behave accordingly. -
Question about compiling with different jdk's
Hi, I have a question which I can't find an answer for.
If I compile my program with JDK 1.6, will someone be able to run it using JRE 1.5?
a) Assuming I don't use any new 1.6 features.
b) Assuming I use new 1.6 features.
Btw, I would check myself on another computer with JRE 1.5, but right now I can't :-(

Thank you for your reply.
So when I use the 1.6 JDK, I can compile my program with these flags and I'll be able to run it on the specified release JRE?

Yes, unless the system is broken in the beta, of course.

In other words, there is no way that an app compiled without these flags would work on an older JRE?

Yup. Though some betas might default to generating class files for 5.0 runtimes (some 5.0 betas generated 1.4 class files, for example). -
A question about compilation albums, and how they are sorted...
Hi all,
Right, I have loads of one-off songs by many artists, or I just have the singles from albums, etc. To save having hundreds of single songs on their own, I put them all together in self-made compilation albums with the decade as the album. So I have a 1960s, 1970s, 1980s, 1990s and 2000s album. I also have a Disney album, with the different movies as the artist; I do the same with musicals. It just helps with tidying up my library. When I sort by album I would like the artists to be listed alphabetically, which doesn't always happen, and I don't know what to do to make it happen. Especially if I have a full musical soundtrack that has the tracks listed, and I have a few of them. Instead of each artist being separate, the tracks are all muddled up and listed numerically. For example, the first track of The Book of Mormon is followed by the first track of RENT, then the second tracks come; all the numbered tracks are bundled. I hope that makes sense.
The other issue is when I transfer these songs to my iPhone. I want the album to appear as one album, but I want the artists to be separate. So I want one 'Songs from the Musicals' album, but I want to look at artists and see every show listed separately. Again, I hope this makes sense.
I've seen how to add an 'Album Artist' which I tried, but that just gathers it up all into one artist. So to be clear, I want one album with multiple artists on my iPhone.
If you have any tips or help, it would be greatly appreciated as it's becoming increasingly frustrating trying out different combinations of options to see what might work.
Thank you.

Thanks, if it's a feature of PL/SQL, that will be nice.
But both are bind variables; one is with bind variable peeking and the other is not. It's better for me to keep in mind that there is a switch to control peeking when using bind variables in PL/SQL, as I thought there was no way to avoid peeking when using bind variables... -
A little question about the WWDC 2006 and Macbook
I was considering buying my first MacBook, no, my first Mac EVER, these days, but after I heard of WWDC 2006 I'm afraid of missing some new upgrades to the MacBook if I buy it next week.
But someone also told me that even if there are upgrades, I'd probably have to wait about two months for them to come to Europe, and that's much too long for me, because I need it for work in the next four weeks!
Hope you can tell me what to do.
Greetings,
Agent Darkbooty

Thank you very much for your help.
But I've still got a question...
...would you buy the MacBook next week, or would you wait until the end of the conference?
Seems like there will only be software updates for the MacBook -
A Little question about Pixel Aspect Ratio
This doubt has been bugging me since I started editing HD formats. It's about pixel aspect ratio.
Let's suppose I have received some material in HD format, for instance, but I will deliver this material in another format, DV NTSC, for instance.
The pixel aspect ratios of the material I received and of the format I will deliver are different. What can I do to avoid this problem? Do I need to apply some plugin, or does Final Cut handle it automatically when I export the final sequence?
Thank you.

The software takes care of it for you.
As long as your conversion maintains the overall aspect ratio (ie 16:9), it is irrelevant what the individual pixels are doing.
For example, if I convert DVCProHD 720p to ProRes 720p, it will look fine even though the DVCProHD started out with 960 pixels in the x dimension and the ProRes will have 1280.
x
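The DVCProHD example above works out because displayed width = stored width × pixel aspect ratio. A quick arithmetic check (the 4:3 PAR for DVCProHD 720p and the square pixels of ProRes are standard figures, quoted here from memory):

```shell
# Displayed width = stored width * pixel aspect ratio.
# DVCProHD 720p: 960 stored pixels at a 4:3 PAR; ProRes 720p: 1280 square pixels.
awk 'BEGIN {
    dvcprohd = 960 * 4 / 3   # 1280 displayed pixels
    prores   = 1280 * 1      # 1280 displayed pixels
    print dvcprohd, prores
}'
```

Both come out to 1280, which is why the overall 16:9 frame is preserved even though the stored pixel counts differ.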