Help needed to record an experiment for a running process

Hi Team,
While trying to record an experiment through the Profile Running Process option, we ran into problems generating the experiment file. The error says the directory isn't writable, even though we have verified that all permissions are set on the folder. The application is a C++ implementation.
We have attached the output:
Running: /x/opt/SolarisStudio12.4-beta_mar14-linux-x86/lib/analyzer/lib/../../../bin/collect -P 16824 -o test.1.er -d /x/web/STAGE2LP14/xxxx -p on -S on
name test. is in use; changed to test.4.er
Reading xxxx
Reading ld-linux.so.2
name test. is in use; changed to test.4.er
Reading libppfaketime.so.1
Reading librt.so.1
Reading libpthread.so.0
Reading libcrypt.so.1
Reading libz.so.1
Reading libdl.so.2
Reading libkrb5.so.3
Reading libicui18n.so.36
Reading libicuuc.so.36
Reading libicudata.so.36
Reading libicuio.so.36
Reading libexpat.so.0
Reading libqpidmessaging.so.3
Reading libqpidtypes.so.1
Reading libxerces-c.so.27
Reading libstdc++.so.6
Reading libm.so.6
Reading libc.so.6
Reading libgcc_s.so.1
Reading libk5crypto.so.3
Reading libcom_err.so.2
Reading libkrb5support.so.0
Reading libkeyutils.so.1
Reading libresolv.so.2
Reading libqpidclient.so.6
Reading libuuid.so.1
Reading libselinux.so.1
Reading libqpidcommon.so.6
Reading libsepol.so.1
Reading libboost_program_options.so.2
Reading libboost_filesystem.so.2
Reading libsasl2.so.2
Reading ISO8859-1.so
Reading libcollector.so
Attached to process 16824
t@4133656384 (l@16824) stopped in __kernel_vsyscall at 0xffffe410
0xffffe410: __kernel_vsyscall+0x0010:    popl     %ebp
Process ID: 12981
dbx: The HW counter configuration could not be loaded
Elapsed Time: 85 ms
Run "collect -h" or "er_kernel -h" with no other arguments for more information on HW counters on this system.
Execution completed, exit status is 0
dbx: Creating experiment database /x/web/STAGE2LP14/xxxxxx/test.4.er (Process ID: 13736) ...
dbx: Creating experiment database /x/web/STAGE2LP14/xxxxxx/test.4.er (Process ID: 13736) ...
dbx: Experiment directory not writable
Experiment aborted
error at line 16 of file 'dbxcol3wC1XU'
detaching from process 16824
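As a sanity check on the "Experiment directory not writable" message: part of the experiment may be written from inside the profiled process itself (libcollector.so is loaded into it), so the -d directory may need to be writable by the account that owns process 16824, not only by the account that runs collect. A minimal check, assuming that account is appuser (a placeholder, substitute the real one):

ps -o user= -p 16824                  # account the profiled process runs as
ls -ld /x/web/STAGE2LP14/xxxx         # ownership and mode of the experiment directory
su - appuser -c 'touch /x/web/STAGE2LP14/xxxx/.writetest && rm /x/web/STAGE2LP14/xxxx/.writetest'

If the touch fails, the directory is effectively not writable for that account even though it looks fine from your own shell.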
We also tried running the collect command manually. The process started successfully, but when we terminated it with CTRL+ENTER we got a core dump:
f7f40000-f7f50000 rwxp f7f40000 00:00 0
f7f50000-f7f6b000 r-xp 00000000 fd:00 589838                             /lib/ld-2.5.so
f7f6b000-f7f6c000 r-xp 0001a000 fd:00 589838                             /lib/ld-2.5.so
f7f6c000-f7f6d000 rwxp 0001b000 fd:00 589838                             /lib/ld-2.5.so
ffbe7000-ffbfc000 rwxp 7ffffffe9000 00:00 0                              [stack]
ffffe000-fffff000 r-xp ffffe000 00:00 0
dbx: internal error: signal SIGABRT (sent by tkill)
dbx's coredump will appear in /tmp
We aren't sure of the correct way to terminate the collect process manually. The command we used was:
/x/opt/SolarisStudio12.4-beta_mar14-linux-x86/lib/analyzer/lib/../../../bin/collect -P 16824 -o test.1.er -d /x/web/STAGE2LP14/xxxx -p on -S on
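One thing that may be worth trying (assuming collect honours standard termination signals, which is not confirmed here) is to interrupt the collect process itself rather than the profiled application:

pgrep -f 'collect -P 16824'      # find the PID of the collect process
kill -INT <collect_pid>          # ask collect to stop recording and finalize the experiment
kill -TERM <collect_pid>         # fallback if it does not exit after the SIGINT

Here <collect_pid> stands for the PID reported by pgrep.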
Please help us
Thanks
Sattish.

Hi Darryl,
We tried with the below-mentioned option:
./collect -P 24829 -o /tmp/test.9.er
but we still get the same error:
NOTE: No J2SE[tm] was specified for checking.
    The following J2SE[tm] versions are recommended:
      J2SE[tm] 1.7.0_25 or later 1.7.0 updates (preferred)
NOTE: You can download and install the J2SE[tm] from http://www.oracle.com/technetwork/java/javase/downloads.
WARNING: Java data collection may fail: J2SE[tm] version is unsupported.
Reading atlasserv
Reading ld-linux.so.2
Reading libppfaketime.so.1
Reading librt.so.1
Reading libpthread.so.0
Reading libcrypt.so.1
Reading libz.so.1
Reading libdl.so.2
Reading libkrb5.so.3
Reading libicui18n.so.36
Reading libicuuc.so.36
Reading libicudata.so.36
Reading libicuio.so.36
Reading libexpat.so.0
Reading libqpidmessaging.so.3
Reading libqpidtypes.so.1
Reading libxerces-c.so.27
Reading libstdc++.so.6
Reading libm.so.6
Reading libc.so.6
Reading libgcc_s.so.1
Reading libk5crypto.so.3
Reading libcom_err.so.2
Reading libkrb5support.so.0
Reading libkeyutils.so.1
Reading libresolv.so.2
Reading libqpidclient.so.6
Reading libuuid.so.1
Reading libselinux.so.1
Reading libqpidcommon.so.6
Reading libsepol.so.1
Reading libboost_program_options.so.2
Reading libboost_filesystem.so.2
Reading libsasl2.so.2
Reading ISO8859-1.so
Reading libcollector.so
Attached to process 24829
t@4133668672 (l@24829) stopped in __kernel_vsyscall at 0xffffe410
0xffffe410: __kernel_vsyscall+0x0010:   popl     %ebp
dbx: The HW counter configuration could not be loaded
Run "collect -h" or "er_kernel -h" with no other arguments for more information on HW counters on this system.
dbx: Creating experiment database /tmp/test.9.er (Process ID: 7769) ...
dbx: Experiment directory not writable
Experiment aborted
error at line 15 of file 'dbxcol61PZeE'
detaching from process 24829
Could you please review?
Thanks
Sattish.

Similar Messages

  • Do I need AAAA records in DNS for MPs for clients connecting via DirectAccess?

    This is my situation:
    Have had SCCM 2007 r3 installed for some time
    Have DirectAccess implemented for over 2 years
    We are in Mixed Mode
    Have always had issues with DA connected clients getting adverts from SCCM
    DA connected clients do not report heartbeat
    In troubleshooting I have added the ipv6 boundaries and followed all the articles on FW settings and DA settings.  Still no luck.
    I ran across an article that said you need to have AAAA records in DNS for your MPs. Is that true? And if so, how do I get them into DNS, as they are not there right now?
    Any help (especially if I am on the wrong track) would be helpful.
    Thanks
    Eric

    Yes, I know this is an old post, but I’m trying to clean them up.
    No, CM07 does not need an AAAA record. Honestly, this is going to be a DA issue, not a CM07 issue.
    Garth Jones | My blogs: Enhansoft and
    Old Blog site | Twitter:
    @GarthMJ

  • Help needed in Finding Download location for Sun One Portal 7

    Hi,
    Help needed finding the download location for Sun ONE Portal 7. I tried to find it on the Oracle download page,
    http://www.oracle.com/us/sun/sun-products-map-075562.html, but was unable to find it.
    Please share the link for download location.
    I am totally new in Sun ONE Portal.
    Thanks,
    Edited by: 945439 on Oct 5, 2012 3:41 AM

    Try edelivery.oracle.com under Sun products.

  • Help needed on online theme creation for mobile phones

    Hello everybody,
    I want to create a web application that will create themes for different mobile phones. In that application the end user can upload JPG/GIF images of their choice and select the mobile phone make (e.g. Nokia) and the model number (e.g. 6030). After that they can create their desired theme by clicking a button and can also download it.
    My main problem is how to convert an image into a mobile phone theme (*.thm or *.nth or *.sis).
    Can anybody give any suggestion on this matter?
    Thanks in advance.
    Tanmoy

    Hi everybody,
    My main problem is how to convert an image into a mobile phone theme (*.thm or *.nth or *.sis).
    Please give me any guidelines on how I can proceed.
    Help needed.
    Thanks in advance.
    Tanmoy

  • HELP: need to do version comparison for large no.of programs

    I need to do a version comparison for a large set of programs (approx. 4000). If anybody has a technique to do it quickly, please let me know.

    Hi
    try using this FM
    /SDF/CMO_COMP_VERSION
    AKB_VERSION_COMPARE
    Regards
    Shiva

  • Help: how to modify a setting for a running zone

    Solaris 10 10/05 on a Sun E3000.
    Q: How do I modify the settings for a running zone?
    I want to add a lofs mount for /usr/local read/write.
    I also want to add access to the CD-ROM.
    So I used zonecfg to "add fs", etc., and then committed.
    Now what should I do to make the change take effect in the zone?
    thanks.

    To rephrase my question:
    once I have a zone installed, is there a way to
    modify the zone settings, such as adding an fs
    or devices, without destroying the zone? Thanks.
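    For reference, a minimal sketch of the usual sequence (the zone name myzone, the zonepath /zones/myzone and the source directory /opt/zones/myzone/local are placeholders, so adjust them to your setup; treat this as a sketch rather than a verified recipe):
        zonecfg -z myzone
        zonecfg:myzone> add fs
        zonecfg:myzone:fs> set dir=/usr/local
        zonecfg:myzone:fs> set special=/opt/zones/myzone/local
        zonecfg:myzone:fs> set type=lofs
        zonecfg:myzone:fs> end
        zonecfg:myzone> commit
        zonecfg:myzone> exit
        # the committed fs resource is picked up the next time the zone boots:
        zoneadm -z myzone reboot
        # to get an equivalent lofs mount into the running zone without a reboot,
        # it can be mounted from the global zone into the zone's root path:
        mount -F lofs /opt/zones/myzone/local /zones/myzone/root/usr/local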

  • ORABPEL-05002 for long running process

    Hi everybody,
    My question relates to a long-running process I have designed which, after running for a couple of days, ends by reporting the ORABPEL-05002 error:
    ===============================================================
    ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.PerformMessage"; the exception is: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:152)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    ===============================================================
    Looking at the Manual Recovery screen, I can see an activity I can recover. It is an assign activity where I'm doing a single boolean assignment.
    Of course, together with the ORABPEL-05002 error I also got the 'Transaction was rolled back: timed out' message. Note that I have modified the transaction-timeout value to 180000. The error occurs during the night, when there is no heavy load on the server.
    Recovering the assign activity brings the process back to the running state.
    My process pattern:
    while (1 == 1) {
        do activity;
        wait_timeout();
    }
    So, I have the following questions:
    1. What is the cause of this error?
    2. How may I automatically recover this lost activity? RecoveryAgent?
    Any suggestion is appreciated.
    Regards,
    amo
    P.S: the full stack of error messages reported in domain.log:
    ===============================================================
    <2006-09-18 08:08:34,101> <ERROR> <SRH.collaxa.cube.engine.dispatch> <DispatchHelper::handleMessage> failed to handle message
    javax.ejb.EJBException: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
    java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at com.evermind.server.ejb.EJBUtils.makeException(EJBUtils.java:873)
         at ICubeEngineLocalBean_StatelessSessionBeanWrapper0.handleWorkItem(ICubeEngineLocalBean_StatelessSessionBeanWrapper0.java:1479)
         at com.collaxa.cube.engine.dispatch.message.instance.PerformMessageHandler.handle(PerformMessageHandler.java:45)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    Caused by: java.lang.Exception: No Exception - originate from:
         at com.evermind.server.ejb.EJBUtils.makeException(EJBUtils.java:871)
         ... 10 more
    javax.ejb.EJBException: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at ICubeEngineLocalBean_StatelessSessionBeanWrapper0.handleWorkItem(ICubeEngineLocalBean_StatelessSessionBeanWrapper0.java:1479)
         at com.collaxa.cube.engine.dispatch.message.instance.PerformMessageHandler.handle(PerformMessageHandler.java:45)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:08:34,129> <ERROR> <SRH.collaxa.cube.engine.dispatch> <BaseScheduledWorker::process> Failed to handle dispatch message ... exception ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.PerformMessage"; the exception is: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
    ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.PerformMessage"; the exception is: Transaction was rolled back: timed out; nested exception is: java.rmi.RemoteException: No Exception - originate from:java.lang.Exception: No Exception - originate from:; nested exception is:
         java.lang.Exception: No Exception - originate from:
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:152)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:09:05,236> <ERROR> <SRH.collaxa.cube> <BaseCubeSessionBean::logError> Error while invoking bean "activity manager": Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
    ORABPEL-02094
    Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
         at com.collaxa.cube.engine.core.ScopeContext.getScope(ScopeContext.java:213)
         at com.collaxa.cube.engine.core.WorkItem.setCubeInstance(WorkItem.java:259)
         at com.collaxa.cube.engine.core.WorkItemFactory.init(WorkItemFactory.java:68)
         at com.collaxa.cube.engine.core.WorkItemFactory.create(WorkItemFactory.java:58)
         at com.collaxa.cube.engine.adaptors.common.BaseWorkItemPersistenceAdaptor.load(BaseWorkItemPersistenceAdaptor.java:147)
         at com.collaxa.cube.engine.data.WorkItemPersistenceMgr.load(WorkItemPersistenceMgr.java:75)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5185)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5173)
         at com.collaxa.cube.engine.CubeEngine.expireActivity(CubeEngine.java:2136)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:145)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:116)
         at IActivityManagerLocalBean_StatelessSessionBeanWrapper52.expireActivity(IActivityManagerLocalBean_StatelessSessionBeanWrapper52.java:645)
         at com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessageHandler.handle(ExpirationMessageHandler.java:43)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:09:05,274> <ERROR> <SRH.collaxa.cube.engine.dispatch> <DispatchHelper::handleMessage> failed to handle message
    ORABPEL-02094
    Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
         at com.collaxa.cube.engine.core.ScopeContext.getScope(ScopeContext.java:213)
         at com.collaxa.cube.engine.core.WorkItem.setCubeInstance(WorkItem.java:259)
         at com.collaxa.cube.engine.core.WorkItemFactory.init(WorkItemFactory.java:68)
         at com.collaxa.cube.engine.core.WorkItemFactory.create(WorkItemFactory.java:58)
         at com.collaxa.cube.engine.adaptors.common.BaseWorkItemPersistenceAdaptor.load(BaseWorkItemPersistenceAdaptor.java:147)
         at com.collaxa.cube.engine.data.WorkItemPersistenceMgr.load(WorkItemPersistenceMgr.java:75)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5185)
         at com.collaxa.cube.engine.CubeEngine.load(CubeEngine.java:5173)
         at com.collaxa.cube.engine.CubeEngine.expireActivity(CubeEngine.java:2136)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:145)
         at com.collaxa.cube.ejb.impl.ActivityManagerBean.expireActivity(ActivityManagerBean.java:116)
         at IActivityManagerLocalBean_StatelessSessionBeanWrapper52.expireActivity(IActivityManagerLocalBean_StatelessSessionBeanWrapper52.java:645)
         at com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessageHandler.handle(ExpirationMessageHandler.java:43)
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:125)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    <2006-09-18 08:09:05,275> <ERROR> <SRH.collaxa.cube.engine.dispatch> <BaseScheduledWorker::process> Failed to handle dispatch message ... exception ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessage"; the exception is: Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
    ORABPEL-05002
    Message handle error.
    An exception occurred while attempting to process the message "com.collaxa.cube.engine.dispatch.message.instance.ExpirationMessage"; the exception is: Scope not found.
    The scope "BpSwt2.30995" has not been defined in the current instance.
         at com.collaxa.cube.engine.dispatch.DispatchHelper.handleMessage(DispatchHelper.java:152)
         at com.collaxa.cube.engine.dispatch.BaseScheduledWorker.process(BaseScheduledWorker.java:70)
         at com.collaxa.cube.engine.ejb.impl.WorkerBean.onMessage(WorkerBean.java:86)
         at com.evermind.server.ejb.MessageDrivenBeanInvocation.run(MessageDrivenBeanInvocation.java:123)
         at com.evermind.server.ejb.MessageDrivenHome.onMessage(MessageDrivenHome.java:755)
         at com.evermind.server.ejb.MessageDrivenHome.run(MessageDrivenHome.java:928)
         at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
         at java.lang.Thread.run(Thread.java:534)
    ===============================================================

    These are the possible causes of the problem and their solutions:
    1. Poor performance of the dehydration database. If you are using Oracle Lite as the dehydration store, please switch to Oracle 9i or 10g. If Oracle 9i/10g is already in use, check the database parameters 'processes' and 'sessions' to make sure the database can handle the expected throughput.
    2. OC4J has too few available connections to the dehydration database. Increase the maxConnection number of the BPELServerDataSource in BPEL_HOME/integration/orabpel/system/appserver/oc4j/j2ee/home/config/data-sources.xml (developer edition) or IAS_HOME/j2ee/OC4J_BPEL/config/data-sources.xml (mid-tier installation).
    3. The size of the message is too big. There are two ways to deal with this problem:
       a) Increase the transaction timeout in BPEL_HOME/integration/orabpel/system/appserver/oc4j/j2ee/home/config/server.xml (developer edition) or IAS_HOME/j2ee/OC4J_BPEL/config/server.xml (mid-tier installation).
       b) Decrease the auditLevel from BPELConsole -> Manage BPEL Domain -> Configurations tab. Doing so reduces the amount of data saved to the dehydration store.
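    For illustration only (the values are examples and element names can differ between BPEL/OC4J releases, so treat this as a sketch rather than the exact syntax of your installation), the two settings typically look like this. In server.xml, the transaction timeout in milliseconds:
        <transaction-config timeout="180000" />
    and in data-sources.xml, the connection cap on the BPEL data source (other attributes omitted):
        <data-source name="BPELServerDataSource" ... max-connections="50" ... />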
    Cheers
    Anirudh Pucha

  • ProgressIndicator and  hourglass for long running processes

    Hi all,
    Iam using Oracle ADF 10g with EJBs.
    I have a long running process for which I want to give the user an indication of its progress. The process is run upon clicking a submit button.
    I have tested the combination of progressIndicator + poll according to the example from Gerger Consulting (http://gergerconsulting.blogspot.com/2007/04/adf-faces-progressindicator-example-for.html), but it did not work. The problem is that my long-running process hangs the page, and the progress bar does not work until the process has finished.
    I have seen a similar post in this forum (Re: How to run long background jobs in ADF applications), where a user complains that there is no Oracle-provided method for running asynchronous processes from Oracle ADF.
    I've also tried isolating my asynchronous process with the progressIndicator + poll in a single ADF page. The process is activated from an invokeAction in the executables of the PageDef file. But again the process hangs the page and the progressIndicator does not display at all.
    So I have abandoned the idea of the progress indicator and I am thinking of using an hourglass instead.
    Is there an example or are there guidelines for how I can do this?

    Thanks John for your reply,
    I am still working on the progressIndicator. I have read the discussion thoroughly quite a few times.
    One thing I have not yet figured out from your discussion is how you manage to create the long-running-process thread within an action method in the managed bean and keep the thread active after the parent action method has finished.
    Usually the managed beans attached to .jspx pages have request scope, so a commandButton's action method (which would spawn the long-running thread) will finish much earlier than the thread, and it will kill the thread.
    Thanks,
    Dimitris

  • HELP -Need to record in 640x480

    I need to take Captivate 2 shots of Windows XP in 640x480
    resolution. (Note: It will not help me to resize to 640x480 after
    recording )
    XP normally allows a minimum resolution of 800x600. I've been
    told to choose the compatibility tab in preferences for the
    application startup, but Captivate doesn't have a compatibility tab
    (that I can find).
    I reset the DPI from 96 to 125 on the advanced screen of the
    Display. This gives a pretty good simulation of 640X480 - but
    Captivate won't run after I do this!
    Does anyone know a way to record a 640x480 screen with
    Captivate 2 in XP?

    Hello Kevin,
    I have to use a work around in XP to record different
    resolutions as well.
    Try this out-
    Right-Click on your Windows desktop.
    Select Properties.
    Select the Settings tab.
    Click the Advanced button (lower right side).
    Select the adapter tab.
    Click list all modes.
    You should now see a mode selection window, like the one in
    the screenshot.
    (Make sure that the refresh rate is the same as your
    monitor's - most LCDs are 60Hz, and VGAs are usually 72Hz for
    640x480 resolution)
    Select the desired mode and click "Ok".
    http://i55.photobucket.com/albums/g160/wxman1995/list_all_modes.png

  • Search help needs more records than 500 or 5000

    I have a requirement to create a search help on a non-primary-key field, so I need to work on a search help exit to remove all the repeating and blank values. The problem I am facing right now is that the search help gets only 500 records by default, and 5000 if we leave the restriction field blank. I need to get all the records (approx. more than 100,000) and keep only the unique values.
    Please let me know how I can increase the number of records beyond 5000.
    Regards,
    Tashi

    Hi Tashi,
    The following approach might work in your case, by overwriting the data fetch functionality of the standard search help:
    1. Define an internal table of the desired structure, matching the way you get the data from the DB table.
    2. Use your custom function module / class method or a SELECT statement to get the desired data into the newly defined internal table.
    3. Apply the required sorting / filtering / formatting as per the requirements.
    4. Loop through the resultant data in the internal table and append the records to the standard internal table / parameter RECORD_TAB of the search help exit. Please note that RECORD_TAB has the structure field STRING, which needs to be populated with the correct offsets. For example, if the first field is 10 characters and the second field 5 characters, it should be populated in the following way:
    loop at itab into ls_itab.
      " Build the flat RECORD_TAB line by placing each field at its offset
      " within the STRING field (first field 10 characters, second field 5 characters).
      wa_record-string+0  = ls_itab-field1.  " offset 0
      wa_record-string+10 = ls_itab-field2.  " offset 10
      wa_record-string+15 = ls_itab-field3.  " offset 15
      append wa_record to record_tab.
      clear wa_record.
    endloop.
    EXIT.
    Please try this approach and let us know.
    Hope this helps.
    -Sajan Joseph.

  • Help needed regarding the deployment architecture for PROD env

    Dear All,
    Please help me with some clarifications regarding the deployment architecture for PROD env.
    As of now I have 2 single node 12.1.1 installations for DEV and CRP/TEST respectively.
    Shortly I will have a PROD env of 12.1.1 with one DB node and 2 middle-tier (apps) nodes. I need help deciding whether:
    1) to have a shared APPL_TOP on the SAN for the 2 apps nodes or to have separate APPL_TOPs for the 2 apps nodes. Which will be more beneficial in my case from a business point of view? The INST_TOPs will be node-specific in any case, right?
    2) to enable the Concurrent Managers on the DB node, on the primary apps node, or on both apps nodes for better performance.
    12.1.1 is installed in RHEL 5.3
    Thanks and Regards

    Hi,
    Please refer to (Note: 384248.1 - Sharing The Application Tier File System in Oracle E-Business Suite Release 12).
    For enabling the CM, it depends on what resources you have on each server. I would recommend you install it on the application tier node, and leave the database installed on one server with no application services (if possible).
    Regards,
    Hussein

  • Help Needed Configuring Post Set Up For Big Indie Feature:

    We’ve got incredible performances from an amazing cast of well-known character actors from film, TV and stage, a unique and inspiring script, and some truly beautiful, cinematic footage. And now — it’s all about putting it together…
    But I’m having trouble finding an accurate, effective and (most importantly) DETAILED workflow accommodating the latest version of Premiere, to post a LENGTHY feature film shot on the RED Epic in 5k FF. I’ve reviewed videos here, but in conducting edit tests I'm encountering all manner of glitches and problems, and I need to ensure that I am properly configuring our post workflow and all of the associated hardware.
    I’ll be dividing the film into project file “reels” to help keep things manageable.  As editor and DP, it's also most important for me to edit on a 4k timeline (to take advantage of reframing and stabilization of the 5k). Footage is on three Pegasus II RAIDS.
    There will be a LOT of FX work done outside of Premiere in AE (and other compositing and graphics programs).  I’m planning to finish in Resolve, outputting at 4k.
    I have the new Mac Pro with:
    - 2.7GHz 12-core Intel Xeon E5 with 30MB of L3 cache
    - 64GB (4x16GB) of 1866MHz DDR3 ECC - 4X16GB
    - 1TB PCIe-based flash storage
    - Dual AMD FirePro D700 GPUs with 6GB of GDDR5 VRAM each
    A Sonnet Echo Express III Desktop 3-Slot (with Thunderbolt upgrade) housing the following:
    - Red Rocket
    - Connection card for HP LTO5 Ultrium 3000 Sas Ext Tape Drive
    If needed, I’m open to acquiring other cards or hardware, too (possibly adding the Red Rocket-X to the mix).  And if the referral of a PAID individual with firsthand knowledge of such a setup – to at least help with the initial set up and configuration – is what I need, I’m open to that, as well.
    We truly have an amazingly powerful compilation of principal photography, and a great story to tell — I just need to overcome this major hurdle of setting up our post.  As a somewhat newbie to Premiere, I greatly appreciate any advice, pointing-me-in-the-right-direction, or suggestions of individuals to help oversee this for compensation and screen credit that anyone can offer me here...
    Many Thanks,
    Bill

    Oooh...big involved questions and many of them.
    If it helps..
    A few things I know I would do before even starting on the actual film edit
    ..is give the new system a massive shakedown.
    Thrash the hardware  and software with test footage , graphics and audio etc.
    Test the pipelines and workflows.
    Set up a BACKUP routine. DO NOT RELY ON AUTOSAVES
    Would NOT do O.S. updates once started with a stable system.
    Me... I would not use a Dynamic Link workflow for FX. (I would use D.I.s.)
    Would create a flow-chart plan for the edit and post prod to avoid generation losses and to plan for efficiencies and scheduling.
    Work toward lockdowns before FX, audio, CC and grade. (Avoid the trap of being creatively impatient, for the benefit of a smooth edit experience.)
    Sort this first...
    I'm encountering all manner of glitches and problems, and need to ensure that I am properly configuring our post workflow and all of the associated hardware.
    You might want to specify some of this stuff and see if you get answers here or elsewhere..
    Suggestion - can you wait for the next version of PPro? It's coming very soon, evidently. Start with the latest version so you don't need to update midstream. A feature takes a long time to post, and some cool new features may help, e.g. the enhanced Master Clip.
    Have you considered Prelude in the workflow to log and set up your project? (Never used it myself, but I would consider/investigate it for long form.)
    Good luck and enjoy the edit process.

  • Help needed in calculations in Workbook for input query

    Hi friends,
    I am displaying an input query in the workbook. The format of the query is:
    Account                                  Amount
    1010 Undistributed Amount     $ 2000
    1010 PC1
    1010 PC2
    1010 PC3
    The purpose of this query is to distribute the undistributed amount across the 3 profit centers below. When an amount is entered for PC1, the entered amount must be subtracted from the undistributed amount. Bottom line: the undistributed amount must be refreshed every time an amount is entered for PC1, PC2 or PC3, and vice versa.
    I tried writing some Excel formulas, but no use: when the query is in edit mode I was not able to write the formulas into the cells.
    Please give me your valuable inputs and help me getting this function work.
    Regards,
    Ravikanth

    Hi,
    We achieved the same requirement in our project by creating one more characteristic which is derived from the changed record. You can refer to the how-to guide given below. Once you get the delta record, you can redistribute it using a FOX formula, taking the derived characteristic into the aggregation level and checking its value.
    http://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/a05666e5-91bf-2a10-7285-a80b7f5f7fd2&overridelayout=true
    For example: one WBS element spans three periods, initially all with the same amount. If I change the value for one period, the remaining amount should be equally distributed across the remaining periods. The special characteristic value is "A" initially. Now if I change the amount for 010.2009 to 100, the remaining 500 should be equally distributed.
    HEADER LINE.
    WBS,Period,Zdelta,Amount
    WBS1 010.2009  A  200
    WBS1 011.2009  A  200
    WBS1 012.2009  A  200
    Now when I change the amount from 200 to 100, the negative delta will be posted with ZDELTA = "X" and the value -100. So now in the cube the records will look like this:
    WBS1 010.2009  A  100
    WBS1 010.2009  X  -100
    WBS1 011.2009  A  200
    WBS1 012.2009  A  200
    Now you can write the FOX formula in which you check whether the total amount is less than the original amount. Then you can post the remaining amount to the records where ZDELTA does not have the value X.
    Hope this works out for you.
    Regards,
    Indu

  • Help needed with a PS script for network share documentation

    I found a nice PS script that will do what I want; however, the output portion seems to be broken. It outputs the permissions and details, but does not list which share they refer to... Can anyone help with this?
    Thanks!
    https://gallery.technet.microsoft.com/scriptcenter/List-Share-Permissions-83f8c419#content
    <# 
               .SYNOPSIS  
               This script will list all shares on a computer, and list all the share permissions for each share. 
               .DESCRIPTION 
               The script will take a list all shares on a local or remote computer. 
               .PARAMETER Computer 
               Specifies the computer or array of computers to process 
               .INPUTS 
               Get-SharePermissions accepts pipeline of computer name(s) 
               .OUTPUTS 
               Produces an array object for each share found. 
               .EXAMPLE 
               C:\PS> .\Get-SharePermissions # Operates against local computer. 
               .EXAMPLE 
               C:\PS> 'computerName' | .\Get-SharePermissions 
               .EXAMPLE 
               C:\PS> Get-Content 'computerlist.txt' | .\Get-SharePermissions | Out-File 'SharePermissions.txt' 
               .EXAMPLE 
               Get-Help .\Get-SharePermissions -Full 
    #> 
    # Written by BigTeddy November 15, 2011 
    # Last updated 9 September 2012  
    # Ver. 2.0  
    # Thanks to Michal Gajda for input with the ACE handling. 
    [cmdletbinding()] 
    param([Parameter(ValueFromPipeline=$True, 
        ValueFromPipelineByPropertyName=$True)]$Computer = '.')  
    $shares = gwmi -Class win32_share -ComputerName $computer | select -ExpandProperty Name  
    foreach ($share in $shares) {  
        $acl = $null  
        Write-Host $share -ForegroundColor Green  
        Write-Host $('-' * $share.Length) -ForegroundColor Green  
        $objShareSec = Get-WMIObject -Class Win32_LogicalShareSecuritySetting -Filter "name='$Share'"  -ComputerName $computer 
        try {  
            $SD = $objShareSec.GetSecurityDescriptor().Descriptor    
            foreach($ace in $SD.DACL){   
                $UserName = $ace.Trustee.Name      
                If ($ace.Trustee.Domain -ne $Null) {$UserName = "$($ace.Trustee.Domain)\$UserName"}    
                If ($ace.Trustee.Name -eq $Null) {$UserName = $ace.Trustee.SIDString }      
                [Array]$ACL += New-Object Security.AccessControl.FileSystemAccessRule($UserName, $ace.AccessMask, $ace.AceType)  
                } #end foreach ACE            
            } # end try  
        catch  
            { Write-Host "Unable to obtain permissions for $share" }  
        $ACL  
        Write-Host $('=' * 50)  
        } # end foreach $share
    This is what the output looks like when run with 'RemoteServer' | .\Get-SharePermissions.ps1 | Out-File 'sharepermissions.xls'
    FileSystemRights  : Modify, Synchronize
    AccessControlType : Allow
    IdentityReference : Everyone
    IsInherited       : False
    InheritanceFlags  : None
    PropagationFlags  : None

    Actually, the share name is written only with Write-Host, so it does not make it into the redirected output; the last line of the loop is "$ACL", which is an array of objects.
    Here is a version that gets the info more easily and produces flexible objects. It should be easier to modify into what is needed.
    # Get-ShareSec.ps1
    # Emits one object per share ACE, so the share name survives piping
    # to Out-File / Export-Csv instead of being lost as console-only text.
    [cmdletbinding()]
    param(
        [Alias('ComputerName')]
        [Parameter(ValueFromPipelineByPropertyName=$True)]
        $Name = $env:COMPUTERNAME
    )
    Process {
        Write-Verbose "Computer=$Name"
        # Type=0 restricts the query to ordinary disk shares
        $shares = Get-WmiObject Win32_Share -ComputerName $Name -Filter 'Type=0' -ea 0
        foreach ($share in $shares) {
            $sharename = $share.Name
            Write-Verbose "`tShareName=$sharename"
            $ShareSec = Get-WmiObject -Class Win32_LogicalShareSecuritySetting -Filter "name='$sharename'" -ComputerName $Name
            try {
                foreach ($ace in $ShareSec.GetSecurityDescriptor().Descriptor.DACL) {
                    # The share name is a property of the output object, not console text
                    $props = [ordered]@{
                        ComputerName  = $Name
                        ShareName     = $sharename
                        TrusteeName   = $ace.Trustee.Name
                        TrusteeDomain = $ace.Trustee.Domain
                        TrusteeSID    = $ace.Trustee.SIDString
                    }
                    New-Object PsObject -Property $props
                }
            }
            catch {
                Write-Warning ('{0} | {1} | {2}' -f $Name, $sharename, $_)
            }
        }
    }
    Get-ADComputer -Filter * | .\Get-ShareSec.ps1 -Verbose
    ¯\_(ツ)_/¯

  • Help needed: making a psptoolchain package for arch

    Hi....
    I'm very new to Arch Linux (I've been using it for 3 weeks now), but I really like it.
    Now I'm trying to make an Arch Linux package for the psptoolchain (PSPs are so nice).
    I already have working packages for psp-gcc and psp-binutils (I wrote my own PKGBUILDs), which came with the psptoolchain.
    But now I got stuck with psp-newlib and pspsdk.
    At the moment I'm building all the packages separately, but later I'll try to make one package for the whole psptoolchain....
    So here comes my problem with psp-newlib and pspsdk...
    psp-gcc -march=i686 -O2 -pipe -G0 -Wall -I../../src/base -I../../src/kernel -c sceAtrac3plus.S
    sceAtrac3plus.S:0: error: bad value (i686) for -march
    Assembler messages:
    Error: Bad value (i686) for -march
    make[3]: *** [sceAtrac3plus.o] Error 1
    make[3]: Leaving directory `/mnt/data/archlinux/packages/psptoolchain/pspsdk/src/pspsdk/src/atrac3'
    make[2]: *** [all-recursive] Error 1
    make[2]: Leaving directory `/mnt/data/archlinux/packages/psptoolchain/pspsdk/src/pspsdk/src'
    make[1]: *** [all-recursive] Error 1
    make[1]: Leaving directory `/mnt/data/archlinux/packages/psptoolchain/pspsdk/src/pspsdk'
    make: *** [all] Error 2
    ==> ERROR: Build Failed. Aborting...
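    (A plausible cause, though not confirmed here: -march=i686 -O2 -pipe are the host CFLAGS that makepkg exports, and they are being handed to the MIPS cross-compiler psp-gcc, which rejects -march=i686. A minimal workaround sketch would be to clear them before the cross-build runs:)
        # at the top of build(), before anything invokes psp-gcc:
        unset CFLAGS CXXFLAGS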
    And here comes my PKGBUILD for psp-newlib:
    # Contributor: [email protected]
    pkgname=newlib
    pkgver=1.13.0
    pkgrel=1
    pkgdesc="GCC for psp-devel"
    url="http://ps2dev.org/psp/Tools/Toolchain/"
    depends=()
    makedepends=()
    source=(ftp://sources.redhat.com/pub/newlib/$pkgname-$pkgver.tar.gz)
    md5sums=('3d07cc367a22b78c44227456b0d3b7dc')
    build() {
    export PSPDEV="/usr/local/pspdev"
    export PATH="$PATH:$PSPDEV/bin"
    cp $startdir/newlib-1.13.0.patch /$startdir/src/$pkgname-$pkgver
    cd $startdir/src/$pkgname-$pkgver
    cat newlib-1.13.0.patch | patch -p1
    cd /$startdir/src/$pkgname-$pkgver
    BUILDDIR="/tmp/pspdev" PSPDEV="/usr/local/pspdev" ./configure --prefix=/usr/local/pspdev --target=psp
    make || return 1
    make DESTDIR=$startdir/pkg install
    }
    I know, using newlib as pkgname is bad, but later I'll change it to psp-newlib and I'll replace the $pkgname's with newlib (I hope this idea isn't too bad)
    One last thing: psp-binutils and psp-gcc are installed in "/usr/local/pspdev"
    I hope you can help me
    thanks, and have a nice day
    XazZ

    After a long break I decided to work on the PKGBUILD again.
    Now I have two PKGBUILDs: one for the psptoolchain and one for pspsdk (pspsdk gets updated very often, so I decided to create a separate PKGBUILD for it).
    I'm not sure whether my PKGBUILDs fit the PKGBUILD standard.
    One thing before I post my PKGBUILDs: I'm not finished adding all required fields (such as license and so on)!
    Here comes the one for psptoolchain:
    pkgname=psptoolchain
    pkgver=2211
    pkgrel=1
    pkgdesc="A collection of tools to create executables for the Sony PSP"
    url="http://ps2dev.org/psp/Tools/Toolchain/"
    depends=('subversion' 'texinfo')
    makedepends=()
    license=('GPL')
    source=(ftp://ftp.gnu.org/pub/gnu/binutils/binutils-2.16.1.tar.bz2 ftp://ftp.gnu.org/pub/gnu/gcc/gcc-4.0.2/gcc-4.0.2.tar.bz2 ftp://sources.redhat.com/pub/newlib/newlib-1.15.0.tar.gz)
    md5sums=('6a9d529efb285071dad10e1f3d2b2967'
    'a659b8388cac9db2b13e056e574ceeb0'
    '4020004b1b7a56ca4cf7f6d35b40a4cb')
    sha1sums=('5c80fd5657da47efc16a63fdd93ef7395319fbbf'
    'f1b714c6398393d8f7f4ad5be933b462a95b075d'
    'f6860b36e48fb831a30bab491230bbc7ce2669a2')
    arch=('i686')
    _svntrunk=svn://svn.pspdev.org/psp/trunk/psptoolchain
    _svnmod=psptoolchain
    _svntrunk1=svn://svn.pspdev.org/psp/trunk/pspsdk
    _svnmod1=pspsdk
    build() {
    cd $startdir/src
    svn co $_svntrunk $_svnmod
    cd psptoolchain
    cp binutils-2.16.1.patch $startdir/src/binutils-2.16.1/
    cp gcc-4.0.2.patch $startdir/src/gcc-4.0.2/
    cp newlib-1.15.0.patch $startdir/src/newlib-1.15.0
    export PSPDEV="$startdir/pkg/usr/local/pspdev"
    export PATH="$PATH:$PSPDEV/bin"
    msg "patching and building binutils..."
    cd $startdir/src/binutils-2.16.1
    cat binutils-2.16.1.patch | patch -p1
    ./configure --prefix=/usr/local/pspdev --target=psp --enable-install-libbfd
    make clean || return 1
    make || return 1
    make DESTDIR=$startdir/pkg install
    msg "building and patching binutils finished"
    msg ""
    msg "patching and building gcc..."
    cd $startdir/src/gcc-4.0.2
    cat gcc-4.0.2.patch | patch -p1
    mkdir objdir
    cd $startdir/src/gcc-4.0.2/objdir
    ../configure --prefix=$startdir/pkg/usr/local/pspdev --target=psp --enable-languages="c" --with-newlib --without-headers
    make || return 1
    make DESTDIR=/ install
    msg "building and patching gcc finished"
    msg ""
    msg "building pspsdk-headers - we'll only need them temporary"
    cd $startdir/src/
    svn co $_svntrunk1 $_svnmod1
    cd $_svnmod1/
    ./bootstrap
    ./configure --prefix=/usr/local/pspdev -with-pspdev=/usr/local/pspdev
    make clean || return 1
    make DESTDIR=$startdir/pkg install-data
    msg "building pspsdk-headers finished"
    msg ""
    msg "patching and building newlib-psp"
    cd $startdir/src/newlib-1.15.0
    cat newlib-1.15.0.patch | patch -p1
    ./configure --prefix=$startdir/pkg/usr/local/pspdev --target=psp
    make || return 1
    make DESTDIR=/ install
    msg "building newlib-psp finished"
    msg ""
    msg "building gcc-c++"
    cd $startdir/src/gcc-4.0.2
    mkdir build-psp-c++
    cd $startdir/src/gcc-4.0.2/build-psp-c++
    ../configure --prefix=$startdir/pkg/usr/local/pspdev --target=psp --enable-languages="c,c++" --with-newlib --enable-cxx-flags="-G0"
    make clean || return 1
    make CFLAGS_FOR_TARGET="-G0"
    make || return 1
    make DESTDIR=/ install
    msg "building gcc-c++ finished"
    msg ""
    msg "removing unnecessary code"
    cd $startdir/pkg/usr/local/pspdev/psp
    rm -rf sdk
    msg "Now you need to build and install pspsdk!"
    And here the one for pspsdk:
    pkgname=pspsdk
    pkgver=2209
    pkgrel=1
    pkgdesc="A collection of Open Source tools and libraries written for the Sony Playstation Portable (PSP)."
    url="http://ps2dev.org/psp/Tools/Toolchain/"
    depends=('psptoolchain')
    makedepends=()
    arch=('i686')
    license="custom"
    _svntrunk=svn://svn.pspdev.org/psp/trunk/pspsdk
    _svnmod=pspsdk
    build() {
    export PSPDEV="/usr/local/pspdev"
    export PATH="$PATH:$PSPDEV/bin"
    cd $startdir/src
    svn co $_svntrunk $_svnmod
    cd $_svnmod/
    ./bootstrap
    ./configure --prefix=/usr/local/pspdev -with-pspdev=/usr/local/pspdev
    make clean || return 1
    make DESTDIR=$startdir/pkg install-data
    make clean || return 1
    ./configure --prefix=$startdir/pkg/usr/local/pspdev -with-pspdev=/usr/local/pspdev
    make || return 1
    make DESTDIR=$startdir/pkg install
    }
    Some explanation: I've used --prefix=$startdir/pkg/usr/local/pspdev very often.
    Explanation: when I wanted to create only one PKGBUILD for the whole psptoolchain, some dependencies ended up in $startdir/pkg. If you use --prefix=/usr/local/pspdev, configure won't find some dependencies (mostly header files), which means I would have had to create separate PKGBUILDs for every part of the psptoolchain (that would be 6 single parts).
    Those --prefix changes don't stop my PSP environment from working! (I tested it myself - I compiled almost all the sample apps included in the pspsdk, and all compiled fine.)
    I hope we can find a better solution than mine.
    Thanks in advance
    XazZ
