Solaris 10 NFS performance on Linux running WS3 Update 3

Dear Support,
I hope someone can help me sort out this problem.
We have a Solaris/SPARC file server running Solaris 10; the machine acts as an NFS file server. We encounter very poor NFS performance when copying files to and from a filesystem over NFS from Linux clients.
I have set up a very simple test scenario: I created a tar file of around 3 GB. The file sits on a SAN system, and the file itself was created on a Solaris 10 UFS filesystem.
Sun E240 running Solaris 8, gigabit interface, copying to and from the same disk via NFS:
time cp /seis/seis600_new/usr.tar /seis/seis600_new/new1.tar
real 2m18.91s
user 0m0.11s
sys 0m29.72s
IBM AMD64 Linux box running WS3 U5, gigabit interface, copying to and from the same disk via NFS:
time cp /seis/seis600_new/usr.tar /seis/seis600_new/new1.tar
real 6m24.670s
user 0m0.130s
sys 0m21.860s
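For scale: assuming the tar file is right around 3 GB, that works out to roughly 22 MB/s for the Solaris client versus roughly 8 MB/s for the Linux client.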
I have also run the test on other Linux boxes, with similar results.
The funny part is that I can reproduce the performance problem on other Sun systems as well, among them a Sun Blade 2000 with 8 GB of RAM.
To wrap up:
NFS performance is always bad between the Solaris NFS server and a Linux client.
Performance is not always bad between the Solaris server and Solaris clients.

It's been a while since I was doing linux->solaris nfs, so bear with me as I clear out the cobwebs.
First things to check: Mount options for the nfs mount to the server.
Which version of NFS are you using (v2, v3)? Solaris uses version 3 mounts by default.
What are your rsize and wsize settings for reads and writes?
I believe Linux is limited to using 8k r/w block sizes. Solaris will let you use rsize/wsize up to 32k with NFSv3, which would really help with larger data transfers.
NFSv3 has a number of performance enhancements over v2, so give that a shot with a larger block size, for example:
nfsvers=3,wsize=8192,rsize=8192,nolock,intr
Also experiment with your locking options, that might help some.
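On the Linux client those options go on the mount itself, roughly like this (a sketch only; the server name "sunserver" is made up, the paths are the ones from your test, and the option values are just the starting point above):
mount -t nfs -o nfsvers=3,rsize=8192,wsize=8192,nolock,intr sunserver:/seis/seis600_new /seis/seis600_new
or the equivalent /etc/fstab line:
sunserver:/seis/seis600_new  /seis/seis600_new  nfs  nfsvers=3,rsize=8192,wsize=8192,nolock,intr  0 0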
NFS has always been a fairly weak point for Linux.
A few links for reference:
http://www.scd.ucar.edu/hps/TECH/LINUX/linux.html
http://nfs.sourceforge.net/
Cheers && good luck,
fptt.

Similar Messages

  • NFS performance with Solaris 10

    Hello,
    We have been playing with one of the X4200s running s10u2 (or snv_50, for that matter) and are getting terrible NFS performance numbers. Initially we suspected it was just the ZFS filesystem on the back end (which it was, though zil_disable made it a lot better), but even after exploring a little I am getting terrible numbers for NFS backed by UFS. Using afio to unpack an archive on the disk gives:
    Local:
    afio: 432m+131k+843 bytes read in 263 seconds. The operation was successful.
    Remote:
    afio: 432m+131k+843 bytes read in 1670 seconds. The operation was successful.
    I have raised the ncsize to 1000000, and upped the server threads to 1024.
    The same thing on a Linux box (ext3) turns in a local time of 100 seconds and a remote time of 180 seconds. The difference between the local and remote numbers is just crazy, and the difference with ZFS is way worse:
    Local zfs:
    afio: 432m+131k+843 bytes read in 137 seconds. The operation was successful.
    NFS -> ZFS:
    afio: 432m+131k+843 bytes read in 2428 seconds. The operation was successful.
    I have started looking into dtrace for tracking the problem, but don't have much to report yet.
    Any suggestions appreciated.
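    For reference, the usual way to apply the ncsize and nfsd-thread settings mentioned above on Solaris 10 is roughly:
    # /etc/system (takes effect after a reboot)
    set ncsize=1000000
    # /etc/default/nfs
    NFSD_SERVERS=1024
    # then restart the NFS server
    svcadm restart svc:/network/nfs/server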

    Ask this on the Solaris Forum, not the Java Networking forum.
    Edit: typo

  • Parameters of NFS in Solaris 10 and Oracle Linux 6 with ZFS Storage 7420 in cluster without database

    Hello,
    I have a ZFS Storage 7420 in a cluster, plus hosts running Solaris 10 and Oracle Linux 6 without a DB, and I need to mount NFS shares on these OSes but do not know which parameters are best.
    Which are the best parameters to mount an NFS share on Solaris 10 or Oracle Linux 6?
    Thanks
    Best regards.

    Hi Pascal,
    My question comes up because when we mount NFS shares on some servers, for example Exadata Database Machine or SuperCluster, we need to mount the shares with specific parameters for best performance, for example:
    Exadata
    192.168.36.200:/export/dbname/backup1 /zfssa/dbname/backup1 nfs rw,bg,hard,nointr,rsize=131072,wsize=1048576,tcp,nfsvers=3,timeo=600 0 0
    Super Cluster
    sscsn1-stor:/export/ssc-shares/share1      -       /export/share1     nfs     -       yes     rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3
    Now, my network is 10GbE.
    What about normal servers with only the OS (Solaris and Linux)?
    Which parameters do I need to use for best performance, or are specific parameters not necessary?
    Thanks.
    Best regards.
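    As a starting point, a plain (non-Exadata) client mount could look roughly like this, reusing the option set from the SuperCluster example above (the server name, share and mount point are made up, and rsize/wsize should be tuned for the workload and the 10GbE network):
    Oracle Linux 6 (/etc/fstab):
    zfssa:/export/share1  /mnt/share1  nfs  rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3  0 0
    Solaris 10 (/etc/vfstab):
    zfssa:/export/share1  -  /mnt/share1  nfs  -  yes  rw,bg,hard,nointr,rsize=131072,wsize=131072,proto=tcp,vers=3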

  • License for cross-compilation for solaris 10 sparc on Linux x86

    I'd like to cross-compile for Solaris 10 SPARC on Linux x86 using gcc (for Linux). To do that, I have to copy libraries (/lib/64) and includes (/usr/include) from a SPARC machine to my Linux machine.
    The compilation will be run on about (up to) 50 Linux machines (by various developers). We also have 3 solaris-10-SPARC machines.
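    (For context, the mechanical part is just building a local sysroot and pointing gcc at it, roughly like this; the host name, sysroot path and cross-compiler name are placeholders:)
    # copy the headers and 64-bit libraries from one of the SPARC machines
    mkdir -p /opt/sol10-sysroot/usr /opt/sol10-sysroot/lib
    rsync -a sparcbox:/usr/include /opt/sol10-sysroot/usr/
    rsync -a sparcbox:/lib/64 /opt/sol10-sysroot/lib/
    # then build against that sysroot with the cross compiler
    sparc-sun-solaris2.10-gcc --sysroot=/opt/sol10-sysroot -m64 -o hello hello.c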
    I wonder if Solaris license allows me to copy the includes and libs to perform compilation elsewhere.
    I also checked "OTN License Agreement for Oracle Solaris", but it looks like Oracle allows for installing "the programs" on up to 3 machines, but I need it on 50.
    Thanks for any suggestions or redirections to a proper place where I can get an answer.
    Marek

    When installing Solaris 10 01/06 on a Dell 1850 I receive an error message during the install saying "no disk found". I assume that the drive/controller is not recognized. The Dell 1850 is listed under the HCL for Solaris 10 10/06. I don't believe I can use the Solaris(TM) Device Driver for the LSI MegaRAID Adapter floppy with 1/06. I don't have any other Solaris boxes up so I can't build a jump start server. Any suggestions?

  • ISCSI, AFP, SMB, and NFS performance with Mac OS X 10.5.5 clients

    Been doing some performance testing with various protocols related to shared storage...
    Client: iMac 24 (Intel), Mac OS X 10.5.5 w/globalSAN iSCSI Initiator version 3.3.0.43
    NAS/Target: Thecus N5200 Pro w/firmware 2.00.14 (Linux-based, 5 x 500 GB SATA II, RAID 6, all volumes XFS except iSCSI which was Mac OS Extended (Journaled))
    Because my NAS/target supports iSCSI, AFP, SMB, and NFS, I was able to run tests that show some interesting performance differences. Because the Thecus N5200 Pro is a closed appliance, no performance tuning could be done on the server side.
    Here are the results of running the following command from the Terminal (where test is the name of the appropriately mounted volume on the NAS) on a gigabit LAN with one subnet (jumbo frames not turned on):
    time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    In seconds:
    iSCSI 134.267530
    AFP 140.285572
    SMB 159.061026
    NFSv3 (w/o tuning) 477.432503
    NFSv3 (w/tuning) 293.994605
    Here's what I put in /etc/nfs.conf to tune the NFS performance:
    nfs.client.allow_async = 1
    nfs.client.mount.options = rsize=32768,wsize=32768,vers=3
    Note: I tried forcing TCP as well as used an rsize and wsize that doubled what I had above. It didn't help.
    I was surprised to see how close AFP performance came to iSCSI. NFS was a huge disappointment, but that could be due to limitations of server settings that could not be changed because it is an appliance. I'll be getting a Sun Ultra 24 Workstation in soon and will retry the tests (and add NFSv4).
    If you have any suggestions for performance tuning Mac OS X 10.5.5 clients with any of these protocols (beyond using jumbo frames), please share your results here. I'd be especially interested to know whether anyone has found a situation where Mac clients using NFS have an advantage.
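    For what it's worth, the same tuning can also be applied per mount instead of via /etc/nfs.conf; a sketch, with a made-up server and export path:
    sudo mount -t nfs -o rsize=32768,wsize=32768,vers=3 nas:/raid/test /Volumes/test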

    With fully functional ZFS expected in Snow Leopard Server, I thought I'd do some performance testing using a few different zpool configurations and post the results.
    Client:
    - iMac 24 (Intel), 2 GB of RAM, 2.3 GHz dual core
    - Mac OS X 10.5.6
    - globalSAN iSCSI Initiator 3.3.0.43
    NAS/Target:
    - Sun Ultra 24 Workstation, 8 GB of RAM, 2.2 GHz quad core
    - OpenSolaris 2008.11
    - 4 x 1.5 TB Seagate Barracuda SATA II in ZFS zpools (see below)
    - For iSCSI test, created a 200 GB zvol shared as iSCSI target (formatted as Mac OS Extended Journaled)
    Network:
    - Gigabit with MTU of 1500 (performance should be better with jumbo frames).
    Average of 3 tests of:
    # time dd if=/dev/zero of=/Volumes/test/testfile bs=1048576k count=4
    # zpool create vault raidz2 c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ2: 148.98 seconds
    # zpool create vault raidz c4t1d0 c4t2d0 c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with RAIDZ: 123.68 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    iSCSI with two mirrors: 117.57 seconds
    # zpool create vault mirror c4t1d0 c4t2d0 mirror c4t3d0 c4t4d0
    # zfs create -o shareiscsi=on -V 200g vault/iscsi
    # zfs set compression=lzjb vault
    iSCSI with two mirrors and compression: 112.99 seconds
    Compared with my earlier testing against the Thecus N5200 Pro as an iSCSI target, I got roughly 16% better performance using the Sun Ultra 24 (with one less SATA II drive in the array).

  • NFS Performance

    I have 2 questions about NFS on 10.4.
    Client;
    Has NFS performance improved on the client side? Last time I tested was 10.3, and the sustained throughput was about 10-12 MB/s on a gig connection. This was from a Sun NFS server.
    Server;
    Has the performance improved? I am thinking about doing an Xsan NFS re-share to 50+ Linux machines in a compute farm. Will this work out well?
    I'm interested to hear from anybody doing heavy NFS serving.
    Thanks,
    David

    The client is somewhat lacking.
    On one test here (XServe G5 client talking to XServe RAID 5 array connected to XServe G5 NFS server) I get around 40MB/sec copying a file to the RAID over a gigabit ethernet network.
    By comparison, a Solaris machine talking to the same server gets almost 80MB/sec.
    So it sounds like it's improved some from when you last tested, but maybe not by as much as you'd like.
    Note that these tests were done with a single active client (or maybe with some minor background traffic going on at the same time).
    As for the server side, I don't know quite where that tops out. A quick test here shows little difference in times even when multiple clients are writing to the RAID at the same time. The server might be able to keep up with the RAID speed.
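    For a rough before/after comparison on any of these clients, the simplest test is the same dd style used elsewhere in this thread (the mount point and size are placeholders):
    time dd if=/dev/zero of=/Volumes/nfsmount/testfile bs=1m count=4096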

  • Macbook will not boot up after running Software Update (Couldn't fix it)

    Hi there,
    I have also been experiencing some problems after running Software Update (Security Update + Java) a couple of days ago. Similarly to Jandaf (http://discussions.apple.com/thread.jspa?messageID=9000023) I only get the gray startup screen + Apple logo + gear wheel constantly rotating. Based on the information provided in this forum I have tried some procedures which didn't solve my problem so far. Below I describe what I have tried:
    1) I booted using the installation DVD and ran Disk Utility. When performing First Aid/Verify Disk I found some errors ('Invalid node structure'), but I couldn't fix the problem using Repair Disk (same errors: 'Invalid node structure', 'Volume check failed').
    2) I also tried to boot in single user mode (command + s) to run fsck -fy. It didn't work. I got the following message (last 4 lines):
    CSRHIDTTransitionDriver::probe: -s
    CSRHIDTTransitionDriver::probe: booting in single user .. do not match
    Extension "com.apple.driver.AppleUSBTCKeyboard" has no kernel dependency.
    Extension "com.apple.driver.AppleUSBTCEventDriver" has no kernel dependency.
    3) When trying to boot in safe mode. I got a grey screen with the following message:
    'You need to restart your computer. Hold down the power button for several seconds or press the Restart button".
    4) I also tried 'Archive and Install'. Again no success. After reading the DVD the action is terminated because of some errors (it does not specify which).
    5) I also tried the 'German Medicine' (http://discussions.apple.com/message.jspa?messageID=8728797#8728797), i.e. connecting the problematic MacBook to my old G4 via FireWire and reinstalling the 2009-001 update. Again no success. The G4 cannot mount the MacBook hard drive.
    I don't know what to do any more.
    A) Can I resuscitate my old system? If yes how?
    B) If not, can I at least save the data I have in my old HD?
    C) Is it possible to format and reinstall the OS, or is this computer doomed?
    Apologies for writing so many questions but I'm really freaking out!
    Thank you.
    B.

    Hi BDAqua,
    I did as you said and got a FireWire backup drive (Seagate FreeAgent Desk 500 GB). I tried to make a Carbon Copy Cloner clone of the HD but it seems that the copy had some problems. Below I report both the CCC and Console logs:
    1) "Cloning method: Backup everything
    Delete items on target that aren't on the source: No
    18:23:39 Enabling ownership on the target volume...
    18:23:50 The target volume has ownership enabled.
    18:23:50 The target volume has Access Control Lists enabled.
    18:23:50 Authenticating...
    18:23:50 Initiating synchronization engine...
    18:23:51 Cloning...
    18:31:20 rsync: readdir("/Applications/Utilities/Audio MIDI Setup.app/Contents/Resources/French.lproj"): Input/output error (5)
    19:04:41 rsync: readdir("/System/Library/Components/AudioCodecs.component/Contents/Resources/French.lproj"): Input/output error (5)
    19:05:41 rsync: readdir("/System/Library/CoreServices/Menu Extras/IrDA.menu/Contents/Resources/French.lproj"): Input/output error (5)
    19:06:01 rsync: readdir("/System/Library/CoreServices/Menu Extras/User.menu/Contents/Resources/French.lproj"): Input/output error (5)
    19:07:05 rsync: readdir("/System/Library/Extensions/IOUSBFamily.kext/Contents/PlugIns/AppleUSBCDCECMData.kext/Contents/Resources/French.lproj"): Input/output error (5)
    19:10:13 rsync: readdir("/System/Library/PrivateFrameworks/Assistant.framework/Versions/A/Resources/French.lproj"): Input/output error (5)
    19:50:10 rsync error: some files/attrs were not transferred (see previous errors) (code 23) at /Volumes/Home/Users/bombich/Development/Bombich_Software/rsync-3.0.5pre2/main.c (1047) [sender=3.0.5pre2] (51)
    19:50:11 rsync: Some errors were encountered during the backup., Error code: 51
    19:50:14 Sync Engine warnings: (
    "rsync: readdir(\"/Applications/Utilities/Audio MIDI Setup.app/Contents/Resources/French.lproj\"): Input/output error (5)",
    "rsync: readdir(\"/System/Library/Components/AudioCodecs.component/Contents/Resources/F rench.lproj\"): Input/output error (5)",
    "rsync: readdir(\"/System/Library/CoreServices/Menu Extras/IrDA.menu/Contents/Resources/French.lproj\"): Input/output error (5)",
    "rsync: readdir(\"/System/Library/CoreServices/Menu Extras/User.menu/Contents/Resources/French.lproj\"): Input/output error (5)",
    "rsync: readdir(\"/System/Library/Extensions/IOUSBFamily.kext/Contents/PlugIns/AppleUSB CDCECMData.kext/Contents/Resources/French.lproj\"): Input/output error (5)",
    "rsync: readdir(\"/System/Library/PrivateFrameworks/Assistant.framework/Versions/A/Reso urces/French.lproj\"): Input/output error (5)",
    "rsync error: some files/attrs were not transferred (see previous errors) (code 23) at /Volumes/Home/Users/bombich/Development/Bombich_Software/rsync-3.0.5pre2/main.c (1047) [sender=3.0.5pre2]"
    2) "Mac OS X Version 10.4.11 (Build 8S2167)
    2009-03-01 10:30:14 +0100
    2009-03-01 10:30:21.962 SystemUIServer[213] lang is:en
    Mar 1 11:20:32 Bs-Home-Mac /usr/sbin/ocspd: starting
    Mar 1 11:38:25 Bs-Home-Mac ntpd[191]: sendto(17.72.255.12): Bad file descriptor
    Mar 1 12:46:40 Bs-Home-Mac ntpd[191]: sendto(17.72.255.12): Bad file descriptor
    Mar 1 13:54:55 Bs-Home-Mac ntpd[191]: sendto(17.72.255.12): Bad file descriptor
    Mar 1 15:03:13 Bs-Home-Mac ntpd[191]: sendto(17.72.255.12): Bad file descriptor
    Mar 1 16:11:31 Bs-Home-Mac ntpd[191]: sendto(17.72.255.12): Bad file descriptor
    Mar 1 17:19:46 Bs-Home-Mac ntpd[191]: sendto(17.72.255.12): Bad file descriptor
    Main starting
    2009-03-01 17:47:20.393 Skype[313] SkypeApplication::init called
    2009-03-01 17:47:21.554 Skype[313] SKInitDebugLogging
    2009-03-01 17:47:46.169 Skype[313] SkypeSound::setAudioDeviceUID: cannot find audio device with UID (null), trying to use default system output device instead
    2009-03-01 17:47:47.208 Skype[313] SkypeSound::setAudioDeviceUID: cannot find audio device with UID (null), trying to use default system output device instead
    Starting the process...
    2009-03-01 17:48:36.186 Skype[313] MacVideo getDimensions rc == NO, width 0, height 0
    2009-03-01 17:48:36.367 Skype[313] MacVideo getDimensions rc == NO, width 0, height 0
    2009-03-01 17:48:37.367 Skype[313] MacVideo getDimensions rc == NO, width 0, height 0
    2009-03-01 17:48:37.369 Skype[313] MacVideo getDimensions rc == NO, width 0, height 0
    2009-03-01 17:59:16.450 Skype[313] SkypeSound::setAudioDeviceUID: cannot find audio device with UID (null), trying to use default system output device instead
    Mar 1 17:59:19 Bs-Home-Mac crashdump[355]: Skype crashed
    Mar 1 17:59:20 Bs-Home-Mac crashdump[355]: crash report written to: /Users/bernardolima/Library/Logs/CrashReporter/Skype.crash.log
    Mar 1 18:23:50 Bs-Home-Mac authexec: executing /Applications/Carbon Copy Cloner.app/Contents/Resources/helper_tool
    Mar 1 18:28:01 Bs-Home-Mac ntpd[191]: sendto(17.72.255.12): Bad file descriptor
    Main starting
    2009-03-01 18:41:03.154 Skype[393] SkypeApplication::init called
    2009-03-01 18:41:12.433 Skype[393] SKInitDebugLogging
    Mar 1 19:36:16 Bs-Home-Mac ntpd[191]: sendto(17.72.255.12): Bad file descriptor
    Mar 1 19:55:54 Bs-Home-Mac mdimportserver[480]: -[ABAddressBook sharedAddressBook] Can't ABACQUIREFILELOCK Framework/AddressBook/ABAddressBook.m:2746
    Mar 1 19:57:12 Bs-Home-Mac mdimportserver[480]: * +[NSUnarchiver unarchiveObjectWithData:]: extra data discarded
    Mar 1 19:57:12 Bs-Home-Mac mdimportserver[480]: * +[NSUnarchiver unarchiveObjectWithData:]: extra data discarded
    Mar 1 20:44:34 Bs-Home-Mac ntpd[191]: sendto(17.72.255.12): Bad file descriptor
    Mar 1 21:52:51 Bs-Home-Mac ntpd[191]: sendto(17.72.255.12): Bad file descriptor
    Mar 1 23:01:07 Bs-Home-Mac ntpd[191]: sendto(17.72.255.12): Bad file descriptor"
    Due to my lack of knowledge I can't judge how bad this is, but I can tell you that every time I read the word 'bad' in the log it scares the s*** out of me.

  • Error in running baseline update from ATG

    Hi,
    I am trying to import the content in my ATG app schemas as indexed records into Endeca.
    After making the configuration changes listed in the ATG-Endeca integration guide, when I try to do a baseline index from http://localhost:7003/dyn/admin/nucleus/atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin/, the following is what I get:
    PreIndexing (Duration: 0:00:00)
         /atg/endeca/index/commerce/CategoryTreeService                COMPLETE (Succeeded)
    RepositoryExport (Duration: 0:00:19)
         /atg/endeca/index/commerce/SchemaExporter      58      0      COMPLETE (Succeeded)
         /atg/endeca/index/commerce/CategoryToDimensionOutputConfig      9      0      COMPLETE (Succeeded)
         /atg/endeca/index/commerce/RepositoryTypeDimensionExporter      15      0      COMPLETE (Succeeded)
         /atg/commerce/search/ProductCatalogOutputConfig      31      0      COMPLETE (Succeeded)
    EndecaIndexing (Duration: 0:02:18)
         /atg/endeca/index/commerce/EndecaScriptService                COMPLETE (Failed)
    The following is what I get in the logs:
    **** info Wed Dec 19 15:09:19 IST 2012 1355909959144 /atg/endeca/index/commerce/EndecaScriptService Starting script BaselineUpdate in application ATGen
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin --- atg.repository.search.indexing.IndexingException: Starting scrip
    t BaselineUpdate of application ATGen failed
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptRunner.startScript(ScriptRunner.ja
    va:276)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptIndexable.runUpdateScript(ScriptIn
    dexable.java:307)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptIndexable.performBaselineUpdate(Sc
    riptIndexable.java:246)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingTask.doTask(IndexingTask.java:
    401)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingTask.performTask(IndexingTask.
    java:359)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingPhase$IndexingTaskJob.invoke(I
    ndexingPhase.java:469)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.common.util.ThreadDispatcherThread.run(ThreadDispatcherTh
    read.java:178)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin Caused by :java.net.ConnectException: Connection refused: connect
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.AxisFault.makeFault(AxisFault.java:101)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.transport.http.HTTPSender.invoke(HTTPSender.j
    ava:154)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.strategies.InvocationStrategy.visit(Invocatio
    nStrategy.java:32)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.AxisClient.invoke(AxisClient.java:165)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.Call.invokeEngine(Call.java:2784)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.Call.invoke(Call.java:2767)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.Call.invoke(Call.java:2443)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.Call.invoke(Call.java:2366)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.Call.invoke(Call.java:1812)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at com.endeca.eac.client.ScriptControlPortSOAPBindingStub.startS
    cript(ScriptControlPortSOAPBindingStub.java:263)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptRunner.startScript(ScriptRunner.ja
    va:272)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptIndexable.runUpdateScript(ScriptIn
    dexable.java:307)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptIndexable.performBaselineUpdate(Sc
    riptIndexable.java:246)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingTask.doTask(IndexingTask.java:
    401)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingTask.performTask(IndexingTask.
    java:359)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingPhase$IndexingTaskJob.invoke(I
    ndexingPhase.java:469)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.common.util.ThreadDispatcherThread.run(ThreadDispatcherTh
    read.java:178)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin Caused by (#2):java.net.ConnectException: Connection refused: connect
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at java.net.PlainSocketImpl.socketConnect(Native Method)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.jav
    a:213)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at java.net.Socket.connect(Socket.java:529)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAcces
    sorImpl.java:39)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMet
    hodAccessorImpl.java:25)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at java.lang.reflect.Method.invoke(Method.java:597)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.components.net.DefaultSocketFactory.create(De
    faultSocketFactory.java:153)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.components.net.DefaultSocketFactory.create(De
    faultSocketFactory.java:120)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.transport.http.HTTPSender.getSocket(HTTPSende
    r.java:191)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.transport.http.HTTPSender.writeToSocket(HTTPS
    ender.java:404)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.transport.http.HTTPSender.invoke(HTTPSender.j
    ava:138)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.strategies.InvocationStrategy.visit(Invocatio
    nStrategy.java:32)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.SimpleChain.doVisiting(SimpleChain.java:118)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.SimpleChain.invoke(SimpleChain.java:83)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.AxisClient.invoke(AxisClient.java:165)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.Call.invokeEngine(Call.java:2784)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.Call.invoke(Call.java:2767)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.Call.invoke(Call.java:2443)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.Call.invoke(Call.java:2366)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at org.apache.axis.client.Call.invoke(Call.java:1812)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at com.endeca.eac.client.ScriptControlPortSOAPBindingStub.startS
    cript(ScriptControlPortSOAPBindingStub.java:263)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptRunner.startScript(ScriptRunner.ja
    va:272)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptIndexable.runUpdateScript(ScriptIn
    dexable.java:307)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptIndexable.performBaselineUpdate(Sc
    riptIndexable.java:246)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingTask.doTask(IndexingTask.java:
    401)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingTask.performTask(IndexingTask.
    java:359)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingPhase$IndexingTaskJob.invoke(I
    ndexingPhase.java:469)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.common.util.ThreadDispatcherThread.run(ThreadDispatcherTh
    read.java:178)
    **** Error Wed Dec 19 15:09:20 IST 2012 1355909960330 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin
    P.S. - All the Endeca services are running.
    To add to the surprise, after this I am also unable to perform the baseline update from Endeca itself. I get the following error while running a baseline update on the Endeca side:
    [12.19.12 15:49:23] INFO: [ITLHost] Starting shell utility 'emgr_update_set_post_forge_dims'.
    [12.19.12 15:49:32] SEVERE: Utility 'emgr_update_set_post_forge_dims' failed. Refer to utility logs in [ENDECA_CONF]/logs/shell on host ITLHost.
    Occurred while executing line 34 of valid BeanShell script:
    31| // Upload the generated dimension values to Workbench
    32| WorkbenchManager.cleanDirs();
    33| Forge.getPostForgeDimensions();
    34| WorkbenchManager.updateWsDimensions();
    35|
    36| // Upload the generated config to Workbench
    37| WorkbenchManager.updateWsConfig();
    [12.19.12 15:49:32] SEVERE: Caught an exception while invoking method 'run' on object 'BaselineUpdate'. Releasing locks.
    Caused by java.lang.reflect.InvocationTargetException
    sun.reflect.NativeMethodAccessorImpl invoke0 - null
    Caused by com.endeca.soleng.eac.toolkit.exception.AppControlException
    com.endeca.soleng.eac.toolkit.script.Script runBeanShellScript - Error executing valid BeanShell script.
    Caused by com.endeca.soleng.eac.toolkit.exception.EacComponentControlException
    com.endeca.soleng.eac.toolkit.utility.Utility run - Utility 'emgr_update_set_post_forge_dims' failed. Refer to utility logs in [ENDECA_CONF]/logs/shell on host ITLHost.
    [12.19.12 15:49:32] INFO: Released lock 'update_lock'.
    The above error persists even if I remove the application using the --remove-app command and re-deploy a new app with the same name as the previous one.
    Any help/pointer would really be appreciated.
    Thanks,
    Mayank Batra
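    (Since the root cause in the trace is a plain java.net.ConnectException, one thing worth confirming is that something is actually listening on the EAC port the ATG components are configured to call; the default is usually 8888, but substitute whatever port your configuration uses:)
    netstat -an | grep 8888     (use findstr instead of grep on Windows)
    telnet localhost 8888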

    Thanks Pankaj and Patrick.
    My endeca installation works absolutely fine.
    When I install Endeca with Platform Services, MDEX, and Tools and Frameworks, I can create the application, initialize it, load a baseline, and run a baseline update beautifully, until I do the baseline index from the ATG side.
    Once I do that, I am unable to perform indexing from Endeca either.
    What I need to do is reinstall Endeca (at least Platform Services) to resolve this; I have been doing this for quite some time now :(
    My Endeca Workbench instance is up, running and reachable on port 8006.
    The PlatformServices\workspace\logs\shell\ATGen.emgr_update_set_post_forge_dims.log has the following one-liner:
    ERROR: Could not open acquire_lock.status.
    I tried running the baseline index from ATG again, and the following is what I get in the logs this time around:
    **** info Mon Dec 24 12:57:32 IST 2012 1356334052168 /atg/endeca/index/commerce/EndecaScriptService Starting script BaselineUpdate in application ATGen
    **** info Mon Dec 24 13:05:03 IST 2012 1356334503729 /atg/endeca/index/commerce/EndecaScriptService Script BaselineUpdate for application ATGen finished with status Failed
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin --- atg.repository.search.indexing.IndexingException: Script Baselin
    eUpdate for application ATGen failed
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptRunner.waitForScript(ScriptRunner.
    java:381)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptIndexable.runUpdateScript(ScriptIn
    dexable.java:319)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptIndexable.performBaselineUpdate(Sc
    riptIndexable.java:246)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingTask.doTask(IndexingTask.java:
    401)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingTask.performTask(IndexingTask.
    java:359)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingPhase$IndexingTaskJob.invoke(I
    ndexingPhase.java:469)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.common.util.ThreadDispatcherThread.run(ThreadDispatcherTh
    read.java:178)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin Caused by :atg.repository.search.indexing.IndexingException: Script Base
    lineUpdate of application ATGen failed
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptRunner.waitForScript(ScriptRunner.
    java:378)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptIndexable.runUpdateScript(ScriptIn
    dexable.java:319)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.eacclient.ScriptIndexable.performBaselineUpdate(Sc
    riptIndexable.java:246)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingTask.doTask(IndexingTask.java:
    401)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingTask.performTask(IndexingTask.
    java:359)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.endeca.index.admin.IndexingPhase$IndexingTaskJob.invoke(I
    ndexingPhase.java:469)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin at atg.common.util.ThreadDispatcherThread.run(ThreadDispatcherTh
    read.java:178)
    **** Error Mon Dec 24 13:05:03 IST 2012 1356334503843 /atg/endeca/index/commerce/ProductCatalogSimpleIndexingAdmin
    Regards,
    Mayank Batra

  • Please help!!! Cannot run Windows Update in Win 7 Pro after changing new hard-drive. (X201s)

    Hi everyone,
    I have been using my ThinkPad X201s for more than a year now, and recently I put a new, larger hard drive in my laptop. When I used the ThinkPad Rescue and Recovery discs on the new hard drive, everything worked fine except that I cannot run Windows Update after the installation finished. The recovery DVDs were burned directly from the original image stored on the computer when I first received my laptop last year.
    Every time when I try to run the Windows Update, it shows the following: "Windows Update cannot currently check for updates, because the service is not running, you may need to restart your computer."
    I restarted my computer later but nothing changed; I can't run Windows Update and can't even install IE9.
    I then moved back to my old hard drive and everything is normal; no problem with Windows Update. I also tried to create a backup image by running Windows 7 Backup and Restore. Later, when I changed back to the new drive and followed the instructions to restore my system from the image I had created earlier on a USB drive, it worked, except that the system still can't run Windows Update.
    I'm now getting very confused and upset; is it something to do with the software copyright? It's been a week now and I have tried to Google the problem, but no solutions work here. All I want to do is move my current system to the new hard drive so that I can continue to use my laptop, but it just fails updating every time.
    Could anyone kindly tell me what to do when changing the hard drive in order to avoid the problem I'm dealing with? Sorry, I'm not a pro-type user and just want my laptop to work properly.
    My warranty has already expired so I guess the Lenovo Technical Hotline wouldn't answer my questions anymore. Please help!! Much appreciated.
    Thanks so much in advance!!

    I'm not sure this is the issue, but from your description it seems as though all you have to do is activate the Windows Update service. If I were you, I'd search for "Services" in the Start menu, open it and look for "Windows Update". Check in its properties that it is set to "Automatic (Delayed Start)"; if it isn't, set it and start the service. You should then be able to check for updates. Be careful, though, that none of your other programs are stopping the service; usually these programs are there to "optimize your performance" or "boost your system"...
    Hope this helps.
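    If you prefer doing that from an elevated command prompt rather than the Services console, the equivalent is roughly:
    sc config wuauserv start= delayed-auto
    net start wuauserv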

  • Performance problem when running a personalization rule

    We have a serious performance problem when running a personalization rule.
    The rule is defined like this:
    Definition
    Rule Type: Content
    Content Type: LoadedData
    Name: allAnnouncements
    Description: all announcements of types: announcement, deal, new release,
    tip of the day
    If the user has the following characteristics:
    And when:
    Then display content based on:
    (CONTENT.RessourceType == announcement) or (CONTENT.RessourceType == deal)
    or (CONTENT.RessourceType == new release) or (CONTENT.RessourceType == tip
    of the week)
    and CONTENT.endDate > now
    and CONTENT.startDate <= now
    END---------------------------------
    and is invoked in a JSP page like this:
    <%String customQuery = "(CONTENT.language='en') && (CONTENT.Country='nl' || CONTENT.Country='*' ) && (!(CONTENT.excludeIds like '*#7#*')) && (CONTENT.userType ='retailer')"%>
    <pz:contentselector
    id="cdocs"
    ruleSet="jdbc://com.beasys.commerce.axiom.reasoning.rules.RuleSheetDefinitionHome/b2boost"
    rule="allAnnouncements"
    sortBy="startDate DESC"
    query="<%=customQuery%>"
    contentHome="<%=ContentHelper.DEF_DOCUMENT_MANAGER_HOME%>" />
    The customQuery is constructed at runtime from user information, and cannot be constructed with the rules administration interface.
    When I turn on debugging mode, I can see that the rule is parsed and a SQL query is generated, with the correct parameters.
    This is the generated query (with the substitutions):
    select
    WLCS_DOCUMENT.ID,
    WLCS_DOCUMENT.DOCUMENT_SIZE,
    WLCS_DOCUMENT.VERSION,
    WLCS_DOCUMENT.AUTHOR,
    WLCS_DOCUMENT.CREATION_DATE,
    WLCS_DOCUMENT.LOCKED_BY,
    WLCS_DOCUMENT.MODIFIED_DATE,
    WLCS_DOCUMENT.MODIFIED_BY,
    WLCS_DOCUMENT.DESCRIPTION,
    WLCS_DOCUMENT.COMMENTS,
    WLCS_DOCUMENT.MIME_TYPE
    FROM
    WLCS_DOCUMENT
    WHERE
    ((((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'announcement'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'deal'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'new release'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = ''
    AND WLCS_DOCUMENT_METADATA.VALUE = 'tip of the week'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'press release'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'endDate'
    AND WLCS_DOCUMENT_METADATA.VALUE > '2001-10-22 15:53:14.768'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'startDate'
    AND WLCS_DOCUMENT_METADATA.VALUE <= '2001-10-22 15:53:14.768'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'language'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'en'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'nl'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = '*'
    AND NOT (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'excludeIds'
    AND WLCS_DOCUMENT_METADATA.VALUE LIKE '%#7#%' ESCAPE '\'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'userType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'retailer'
    At this moment, the server makes the user wait more than 10 minutes for the query to execute.
    This is what I found out about the problem:
    1) When I run the query in an Oracle SQL client (we are using Oracle 8.1.7.0), it takes 5-10 seconds.
    2) If I remove the second term of (CONTENT.Country='nl' || CONTENT.Country='*') in the custom query, thus restricting to CONTENT.Country='nl', the performance is OK.
    3) There are currently more or less 130 records in the DB that have Country='*'.
    4) When I run the page on our QA server (Solaris), which is at the same time our Oracle server, the response time is OK, but if I run it on our development server (W2K), the response time is ridiculously long.
    5) The problem also happens if I add the term (CONTENT.Country='nl' || CONTENT.Country='*') to the rule definition and remove this part from the custom query.
    Am I missing something? Am I using the personalization server correctly?
    Is this performance difference between QA and DEV due to differences in the OS?
    Thank you,
    Luis Muñiz

    Luis,
    I think you are working through Support on this one, so hopefully you are in good shape.
    For others who are seeing this same performance issue with the reference CM implementation, there is a patch available via Support for the 3.2 and 3.5 releases that solves this problem.
    This issue is being tracked internally as CR060645 for WLPS 3.2 and CR055594 for WLPS 3.5.
    Regards,
    PJL
    "Luis Muniz" <[email protected]> wrote:
    We have a serious performance problem when running a personalization
    rule.
    The rule is defined like this:
    Definition
    Rule Type: Content
    Content Type: LoadedData
    Name: allAnnouncements
    Description: all announcements of types: announcement, deal, new release,
    tip of the day
    If the user has the following characteristics:
    And when:
    Then display content based on:
    (CONTENT.RessourceType == announcement) or (CONTENT.RessourceType ==
    deal)
    or (CONTENT.RessourceType == new release) or (CONTENT.RessourceType ==
    tip
    of the week)
    and CONTENT.endDate > now
    and CONTENT.startDate <= now
    END---------------------------------
    and is invoked in a JSP page like this:
    <%String customQuery = "(CONTENT.language='en') && (CONTENT.Country='nl'
    || CONTENT.Country='*' ) && (!(CONTENT.excludeIds like '*#7#*')) &&
    (CONTENT.userType ='retailer')"%>
    <pz:contentselector
    id="cdocs"
    ruleSet="jdbc://com.beasys.commerce.axiom.reasoning.rules.RuleSheetDefinitio
    nHome/b2boost"
    rule="allAnnouncements"
    sortBy="startDate DESC"
    query="<%=customQuery%>"
    contentHome="<%=ContentHelper.DEF_DOCUMENT_MANAGER_HOME%>" />
    The customQuery is constructed at runtime from user information, and
    cannot
    be constructed with rules
    administration interface.
    When I turn on debugging mode, I can see that the rule is parsed and
    a SQL
    query is generated, with the correct parameters.
    This is the generated query (with the substitutions):
    select
    WLCS_DOCUMENT.ID,
    WLCS_DOCUMENT.DOCUMENT_SIZE,
    WLCS_DOCUMENT.VERSION,
    WLCS_DOCUMENT.AUTHOR,
    WLCS_DOCUMENT.CREATION_DATE,
    WLCS_DOCUMENT.LOCKED_BY,
    WLCS_DOCUMENT.MODIFIED_DATE,
    WLCS_DOCUMENT.MODIFIED_BY,
    WLCS_DOCUMENT.DESCRIPTION,
    WLCS_DOCUMENT.COMMENTS,
    WLCS_DOCUMENT.MIME_TYPE
    FROM
    WLCS_DOCUMENT
    WHERE
    ((((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'announcement'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'deal'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'new release'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = ''
    AND WLCS_DOCUMENT_METADATA.VALUE = 'tip of the week'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'RessourceType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'press release'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'endDate'
    AND WLCS_DOCUMENT_METADATA.VALUE > '2001-10-22 15:53:14.768'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'startDate'
    AND WLCS_DOCUMENT_METADATA.VALUE <= '2001-10-22 15:53:14.768'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'language'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'en'
    AND ((WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'nl'
    )) OR (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'Country'
    AND WLCS_DOCUMENT_METADATA.VALUE = '*'
    AND NOT (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'excludeIds'
    AND WLCS_DOCUMENT_METADATA.VALUE LIKE '%#7#%' ESCAPE '\'
    AND (WLCS_DOCUMENT.ID IN (
    SELECT
    WLCS_DOCUMENT_METADATA.ID
    FROM
    WLCS_DOCUMENT_METADATA
    WHERE
    WLCS_DOCUMENT_METADATA.ID = WLCS_DOCUMENT.ID
    AND WLCS_DOCUMENT_METADATA.NAME = 'userType'
    AND WLCS_DOCUMENT_METADATA.VALUE = 'retailer'
    At this moment, the server makes the user wait more than 10 min for the
    query to execute.
    This is what I found out about the problem:
    1)When I run the query on an Oracle SQL client (We are using Oracle 8.1.7.0)
    , it takes 5-10 seconds.
    2)If I remove the second term of (CONTENT.Country='nl' ||
    CONTENT.Country='*' ) in the custom query,
    thus retricting to CONTENT.Country='nl', the performance is OK.
    3)There are currently more or less 130 records in the DB that have
    Country='*'
    4)When I run the page on our QA server (solaris), which is at the same
    time
    our Oracle server,
    the response time is OK, but if I run it on our development server (W2K),
    response time is ridiculously long.
    5)The problem happens also if I add the term (CONTENT.Country='nl' ||
    CONTENT.Country='*' )
    to the rule definition, and I remove this part from the custom query.
    Am I missing something? Am I using the personalization server correctly?
    Is this performance difference between QA and DEV due to differences
    in the
    OS?
    Thank you,
    Luis Muñiz

  • Populate archive logs from production (Solaris) to new production(Linux) db

    I want to migrate my production database (Solaris) to a different environment (Linux).
    After the migration my production database (Solaris) will stay up and running for 2 days.
    We are going to test the applications for 2 days against the temporary production (Linux) database.
    For those 2 days my primary database (Solaris) will be up, running and generating archive logs. After that I want to apply those 2 days of archives to the temporary database (Linux).
    Question 1: How would I populate 2 days of archive logs from the primary database (Solaris) to the new production environment (Linux)?
    Is there any way/utility to apply the generated archives to a different database?
    I would be thankful for any expert suggestion.
    Thanks,

    "Is there any way/utility to apply generated archives to a different database?"
    They have to be applied to the same database (identified by DBID).
    As soon as you OPEN the Linux database (and do any transactions therein), it diverges from your Solaris database. You cannot apply archive logs of the Solaris database to the Linux database.
    Hemant K Chitale

  • Solaris hosts modify pwdLastAuthTime, Linux hosts do not

    I'm trying to debug how Linux hosts bind against my Sun Java Directory Server (6.3).
    I would eventually like to collect information on the last time someone authenticated. This isn't perfect, as there's no way to see if someone's logging in using an authorized SSH key, but at least I could start to get something.
    When I log into solaris hosts with a password, this pwdLastAuthTime updates. When I log into RHEL5 hosts, it does not.
    man pam_ldap says:
    "To authenticate a user, pam_ldap attempts to bind to the directory server using the distinguished name of the user (retrieved previously)."
    So, in theory, the pwdLastAuthTime should be updating as such, no? No name caching is enabled on the machine in question.

    Are we talking about the same user in both cases? Are you able to check if the BIND operation in the access log is actually for the user and not the proxyagent (or equiv in Redhat)?
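    A quick way to check that is to grep the directory server's access log for the user's BIND (the instance path below is a placeholder; use your DSEE instance's logs directory):
    grep 'BIND dn="uid=testuser' /path/to/ds-instance/logs/access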

  • Running an update via command line on demand

    In my organization we are designing a new public access system where the computers will be protected with "rollback" software so no changes are retained. Updates will be handled in a maintenance window and include Windows Updates, anti-virus definitions and hopefully Flash Player, Acrobat Reader and Shockwave Player. The "snapshot" of the disk is then updated.
    Is there a way of running the update on demand? I have tried running the EXE that the scheduled task calls, but that doesn't seem to do much (maybe it only works for the SYSTEM account?).
    I cannot rely on the automatic update happening to run in the maintenance window, and I need output indicating when the update has completed, whether or not an update was actually performed.
    Needless to say, all this needs to be able to run silently.
    Any suggestions are gratefully accepted.
    Martin

    I had looked at this before, but had hoped I could just run a command and parse a "No available updates" or "Successfully Updated" return or something. If this is the only way to do it, then it looks like I will have to have an mms.cfg with these settings:
    AutoUpdateDisable=0
    AutoUpdateInterval=0
    SilentAutoUpdateEnable=1
    SilentAutoUpdateVerboseLogging=1
    And if I'm reading the documents correctly, I need to then launch a web page that uses Flash to prompt the update? If so, that's easy enough, just open a web page to the Adobe page that shows when an update is successful?
    I can then monitor the log file to see what is going on.

  • Is there a way to find out if current Add-ons are compatible with the latest upgrade ver. of Firefox before running the update?

    I want to know if my current Add-ons and themes are compatible with the latest upgrade version of Firefox before running the update.

    Hello azdec.
    You can check the add-on's page at [http://addons.mozilla.org addons.mozilla.org] if it's just one or two. I believe there is an extension in the add-ons gallery that lets you see whether your add-ons are compatible with newer versions of Firefox, but you'll have to search for it, since I don't know what it's called.
    I will remind you that if you want an add-on to be compatible with a newer version of Firefox, you need to contact its author.
    Also, the version of Firefox you are using at the moment has been discontinued and is no longer supported. Furthermore, it has known bugs and security problems. I urge you to update to the latest version of Firefox, for maximum stability, performance, security and usability. You can get it for free, as always, at [http://www.getfirefox.com getfirefox.com].

  • How to remove test run (no update) check

    Hi gurus,
    I'm new to SAP HR. I want to perform a live payroll run, and I am unable to remove the check in 'Test run (no update)'.
    How do I solve this?
    Thanks & regards
    Swethana

    Hi,
    You may be using PC00_M40_CALC_SIMU for the live payroll run; with this t-code we cannot update live payroll. For a live payroll run we need to use PC00_M40_CALC; in this t-code we can remove the check mark for the test run. After removing the 'Test run' check, we can run live payroll. In the simulation transaction (PC00_M40_CALC_SIMU) the test run field is in display mode, not changeable mode, so we cannot remove the test run flag there.
    PC00_M99_Calc_Simu - international simulation payroll
    PC00_M99_CALC - Internation payroll for live run.
    Here the number represents the country grouping, e.g. 99 is international, India is M40, Singapore is M25.
    Regards
    Devi
