Job Chain in RAC Environment

Hi,
I have a job chain defined in a RAC environment.
I need all the steps of my chain to run on the same instance.
How can I do this?
Right now stepA runs on instance 1 and stepB runs on instance 2.
Thanks.

Again, if it is a file-system write issue, use ACFS (11.2.0.x) to create a shared file system that all nodes can see. What are you using to delete/create these files? A shell script?
1) create an ACFS file system of sufficient size to handle your data
2) mkdir /some/acfs/location
3) using SQL*Plus, create a database directory: create directory foo as '/some/acfs/location'
4) put the file in this location
5) use UTL_FILE to delete and create the file (see FREMOVE() - http://docs.oracle.com/cd/E11882_01/appdev.112/e10577/u_file.htm); a sketch follows below
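For illustration, here is a minimal PL/SQL sketch of steps 4-5, assuming the directory object FOO from step 3 and a hypothetical file name data.txt (adjust names to your environment):

BEGIN
  -- delete the old file; FREMOVE raises an error if it does not exist,
  -- so swallow that on the first run
  BEGIN
    UTL_FILE.FREMOVE(location => 'FOO', filename => 'data.txt');
  EXCEPTION
    WHEN OTHERS THEN NULL;
  END;
  -- recreate the file; because FOO points at ACFS, every RAC node sees it
  DECLARE
    fh UTL_FILE.FILE_TYPE;
  BEGIN
    fh := UTL_FILE.FOPEN(location => 'FOO', filename => 'data.txt', open_mode => 'w');
    UTL_FILE.PUT_LINE(fh, 'some data');
    UTL_FILE.FCLOSE(fh);
  END;
END;
/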

Similar Messages

  • Analyze job very slow in RAC environment

    Hi,
    I have an analyze job which runs for 3 hrs in a RAC environment (9.2.0.6).
    Earlier, in a non-RAC environment, it used to complete in 1 hr.
    Need help in solving this issue.
    Ajoy Kumar Thapa

    hi,
    This database is used mainly for queries.
    We do a huge data load during the weekend into one of the tables.
    This table is then exchanged with one partition of a huge partitioned table.
    After that, the analyze job runs on this partitioned table.
    The command we use for analyze is given below:
    BEGIN
      DBMS_STATS.GATHER_SCHEMA_STATS(
        ownname          => '<owner_name>'
       ,estimate_percent => 5
       ,cascade          => true
       ,degree           => 4
       ,granularity      => 'ALL');
    END;
    /
    I want to know what could be the reason that this job, which runs fine in a non-RAC environment, takes so long in a RAC environment.
    Any help is highly appreciated.
    Ajoy Kumar Thapa

  • DBMS_SCHEDULER behavior in a RAC environment

    What is the behavior of DBMS_SCHEDULER in a RAC environment (say 2 nodes, N1 and N2):
    Is the behavior of a DBMS_SCHEDULER job created using a connection to the RAC service name the same as if the job were created using a connection to N1 using the SID of N1?
    If multiple jobs are created at N1 using the SID in the connection, will the RAC environment manage these in parallel across N1 and N2?
    Thanks.

    Hi,
    Is the behavior of a DBMS_SCHEDULER job created using a connection to the RAC service name the same as if the job were created using a connection to N1 using the SID of N1?
    Yes, there is no difference.
    If multiple jobs are created at N1 using the SID in the connection, will the RAC environment manage these in parallel across N1 and N2?
    Yes. However, jobs have a slight preference to run on the instance they ran on previously (for performance reasons due to caching). So if both nodes are lightly loaded, a job will stick to running on the node it ran on the first time. However, if that node gets loaded, the Scheduler will start running the job on the other node (i.e. simple load balancing).
    Thanks,
    Ravi.

  • Genclntst fails in Oracle 10g RAC environment

    After installing Oracle 10gR2 RAC on a two-node Solaris 10 environment, we need to run $ORACLE_HOME/bin/genclntst to generate the client static library (libclntst10.a), but it failed. We copied a successfully generated libclntst10.a file from a non-RAC Oracle 10g environment running on Solaris 10,
    and it seems to work correctly. Is that really OK? Is there any difference between the libclntst10.a file in a RAC environment and in a non-RAC environment?
    Thanks a lot.
    ----------------------------the error message--------------------------------------------------------
         (file /opt/lib/cobol/lib/libcobrts.so value=LOCL);
    ld: warning: global symbol `' has non-global binding:
         (file /opt/lib/cobol/lib/libcobrts.so value=LOCL);
    ld: warning: global symbol `' has non-global binding:
         (file /opt/lib/cobol/lib/libcobrts.so value=LOCL);
    ld: warning: global symbol `' has non-global binding:
         (file /opt/lib/cobol/lib/libcobrts.so value=LOCL);
    ld: warning: global symbol `' has non-global binding:
         (file /opt/lib/cobol/lib/libcobrts.so value=LOCL);
    ld: warning: global symbol `' has non-global binding:
         (file /opt/lib/cobol/lib/libcobcrtn.so value=LOCL);
    ld: warning: global symbol `' has non-global binding:
         (file /opt/lib/cobol/lib/libcobmisc.so value=LOCL);
    ld: warning: global symbol `' has non-global binding:
         (file /opt/lib/cobol/lib/libcobscreen.so.2 value=LOCL);
    Undefined               first referenced
    symbol                in file
    lms_fblang /oracle/oracle/product/10.2.0.1/lib32/libclntst10.a(lmsai.o)
    lms_pmlang /oracle/oracle/product/10.2.0.1/lib32/libclntst10.a(lmsai.o)
    ld: fatal: Symbol referencing errors. No output written to olcp
    *** Error code 1
    make: Fatal error: Command failed for target `olcp'

    Hi,
    In 11g you can use the instance_id job attribute to do this.
    In 10g you can do this using services.
    - create a service for each instance
    - create a job class pointing to each service
    - grant execute on the job classes to all job owners that need to use them
    - when creating a job, point it to the job class for the service of the instance it should run on (see the sketch below)
    Oracle recommends using services instead of instances, since services are more flexible and can provide high availability and failover in case one instance goes down.
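    For illustration, a minimal sketch of these steps, assuming a service INST1_SVC that runs only on the desired instance and a job owner SCOTT (all names are placeholders):
    BEGIN
      -- job class tied to the per-instance service
      DBMS_SCHEDULER.CREATE_JOB_CLASS(
        job_class_name => 'INST1_CLASS',
        service        => 'INST1_SVC');
    END;
    /
    -- job classes live in the SYS schema, so grant execute to the job owner
    GRANT EXECUTE ON SYS.INST1_CLASS TO scott;
    BEGIN
      -- any job in this class runs where the service INST1_SVC is offered
      DBMS_SCHEDULER.CREATE_JOB(
        job_name   => 'SCOTT.MY_JOB',
        job_type   => 'PLSQL_BLOCK',
        job_action => 'BEGIN NULL; END;',
        job_class  => 'INST1_CLASS',
        enabled    => TRUE);
    END;
    /
    -- in 11g, the instance_id attribute pins the job directly, e.g.:
    -- DBMS_SCHEDULER.SET_ATTRIBUTE('SCOTT.MY_JOB', 'instance_id', 1);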
    Hope this helps,
    Ravi.

  • Oracle Streams in a RAC Environment

    Hi
    I have some questions about setting up Streams in a RAC environment. Would appreciate a quick response as I need the answers by tomorrow. Any help would be greatly appreciated. Here are the questions:
    1> Do we have to create a capture process for each active instance, or will only 1 capture process do?
    2> If yes, do they need a separate queue for each one?
    3> How will the apply process access multiple capture processes, and how will the propagation take place?
    4> Can only 2 tables in the source be replicated instead of the entire database?
    5> In case we use a push job, if both the primary and secondary go down, how can we move to the third instance and use it?
    6> If the instance goes down, do we have to restart the capture process once again?
    7> What is best suited for RAC with respect to Streams - ASM or raw files?
    Regards
    Shweta

    Streams in a 9iR2 RAC environment mines only from archive logs, not online redo logs. This restriction is lifted in 10g RAC. If you choose to go through the downstream capture route in 10g, then you can only mine from archive logs in 10gR1.
    Having said the above here are my answers:
    1> Do we have to create a capture process for each active instance, or will only 1 capture process do?
    You can run multiple capture processes, each on a different instance in RAC. Unless you have a requirement to do so, a single capture process would suffice. The in-memory queue should also be on the same instance the capture process is running on.
    2> If yes, do they need a separate queue for each one?
    YES
    3> How will the apply process access multiple capture processes, and how will the propagation take place?
    Propagation is from a source queue to a destination queue. If the destination is a single-instance database, then you can direct propagations for all of your capture(s) into a single apply queue. If the destination is also RAC, then you can run multiple apply processes on each node and apply changes for a specific set of tables. Maintenance would be something to think about here, along with what happens when one node goes down.
    4> Can only 2 tables in the source be replicated instead of the entire database?
    YES. Streams is flexible enough to let you decide at what level you want to replicate.
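    For illustration, a minimal sketch at the table level, assuming a Streams administrator STRMADMIN with an existing capture queue STRMADMIN.CAPTURE_Q and HR.EMPLOYEES as one of the two tables (all names are placeholders):
    BEGIN
      -- capture DML for this one table only; run once per table to replicate
      DBMS_STREAMS_ADM.ADD_TABLE_RULES(
        table_name   => 'HR.EMPLOYEES',
        streams_type => 'CAPTURE',
        streams_name => 'CAPTURE_STRM',
        queue_name   => 'STRMADMIN.CAPTURE_Q',
        include_dml  => TRUE,
        include_ddl  => FALSE);
    END;
    /
    Tables not covered by a rule are simply not captured, so the rest of the database is left alone.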
    5> In case we use a push job, if both the primary and secondary go down, how can we move to the third instance and use it?
    In theory propagation is a push job. There are certain things you need to configure correctly. If that is done, then you can move the entire Streams configuration to any of the surviving node(s).
    6> If the instance goes down, do we have to restart the capture process once again?
    In 9iR2 you have to restart the Streams processes. In 10g the Streams processes automatically migrate and restart on the new "owning" instance. In both versions, queue ownership is transferred automatically to the surviving instance.
    7> What is best suited for RAC with respect to Streams - ASM or raw files?
    Streams is independent of the storage system you use. I cannot think of any correlation here.

  • Error while scheduling the Email Alert JOB chain

    Hi All,
    I have defined a job chain in CPS, and when I go to schedule it, it gives me an error message. We have taken the trial version.
    Please find the log attached below.
    11:18:31 PM:
    JCS-111004: Queue ETD.sapetd00_Queue has no ProcessServer with the required JobDefinitionType/Service/Resource for Job 932 (submitted from ETD.Z_MONI_BATCH_DP copy from 2009/12/30 18:22:23,113 Australia/Sydney) (submitted from Job Definition ETD.Z_MONI_BATCH_DP (Copy from 2009/12/30 18:22:23,113 Australia/Sydney)): Job Definition Type CSH/Service PlatformAgentService/"Empty"
    JCS-102064: Job 934 (submitted from System_Mail_Send copy from 2009/12/29 17:54:16,608 Australia/Sydney) is global but refers (via Job) to an object in an isolation group
    JCS-102064: Job 934 (submitted from System_Mail_Send copy from 2009/12/29 17:54:16,608 Australia/Sydney) is global but refers (via Chain Step) to an object in an isolation group
    JCS-102064: Job 934 (submitted from System_Mail_Send copy from 2009/12/29 17:54:16,608 Australia/Sydney) is global but refers (via Parent Job) to an object in an isolation group
    Thanks
    Rishi Abrol

    Hi
    Are you logged into the correct isolation group?
    Ensure the process server is also assigned to the queue.
    Regards

  • Job Chaining and Quickcluster

    I always get
    Status: Failed - HOST [Macintosh.local] QuickTime file not found.
    after the first part of the job is successful.
    If I just submit with "This Computer" it works fine. The original file is ProRes 422; the first job uses ProRes 422 to scale it down to 480x270, and the second job compresses to H.264. I found some info on this board from 2008 saying that job chaining and QuickClusters don't work together. Is that still how it is? That's really useless...
    I also found this from Jan 2009
    David M Brewer said:
    The reason the second rendering is failing is... this has happened to me a few times until I figured it out... make sure you set the dimensions of the video in the H.264 settings to the same size as the ProRes dimensions.
    For the most part the dimensions are left blank for the second link, H.264. And don't use 100% of source. Put the physical numbers into the spaces. When you link one video to another, the second codec doesn't know the settings you made for the first video.
    Also make sure you at least check the audio settings for the second video. I usually have ProRes do the audio conversion and just pass it through to the second video's settings. Again, it can happen that the audio is disabled in the H.264 settings. This has happened a few times for me... Check and double-check your settings!
    He doesn't mention anything about with or without QuickClusters, but I tried what he said and could not get it to work with QuickClusters...
    Anyone got any new info on this?

    Studio X,
    Thanks for taking the time to run some tests and post your results.
    I'm finding the same results converting ProRes 422 to MP4, but...
    Other codecs are giving me very different results.
    I've run some random tests to try to get a grip on what's happening.
    First I was playing around with the # of instances. I've read here and on Barefeats that (at least for my model Mac Pro) the number of instances should be set to (# of processors / 2), so I've been using 4 for quite a while now and thought I'd test it for myself.
    A single 5min ProRes422 1920x1080 29.97 file to h.264
    This Computer- 15:28
    2 Instances- 14:56
    3 Instances- 13:52
    4 Instances- 14:48
    5 Instances- 13:43
    6 Instances- 13:48
    7 Instances- 13:58
    In this case 5 instances was the fastest, but not using a QuickCluster wasn't far off.
    A single 2m30s ProRes422 1920x1080 29.97 file to h.264
    This Computer- 3:19
    2 Instances- 3:45
    3 Instances- 3:45
    4 Instances- 3:45
    5 Instances- 3:50
    6 Instances- 4:00
    7 Instances- 4:00
    Interesting... not using a QuickCluster is fastest.
    A single 2m30s ProRes422 1920x1080 29.97 file Scaled down using original codec
    This Computer- 5:20
    4 Instances- 4:10
    5 Instances- 4:10
    7 Instances- 4:11
    A single 1m30s ProRes422 1920x1080 29.97 file to mpeg-2
    This Computer- 2:12
    5 Instances- 2:10
    When QuickClusters are faster, 4-5 instances does seem to be the sweet spot (again, for my setup).
    In the mpeg-2 test I should have used a longer clip to get a better result, but it was getting late and I was just trying to get an idea of each codec's usage of my resources. I was also monitoring CPU usage with Activity Monitor in all tests.
    Now multiclip batches:
    I forgot to write down the length of the clips in this first test, but it consisted of 8 ProRes 422 clips: 3 about 1m long and the rest between 13s and 30s.
    8 ProRes 422 clips to mp4
    This Computer- 11:25
    4 Instances- 5:16
    Same results as Studio X
    Next tests with 5 clips(total 1m51s)
    5 ProRes 422 clips to h.264
    This Computer- 5:00
    4 Instances- 4:52
    5 ProRes 422 clips to mpeg-2
    This Computer- 2:55
    4 Instances- 3:01
    5 ProRes 422 clips to DV NTSC
    This Computer- 6:40
    4 Instances- 5:12
    5 ProRes 422 clips to Photo Jpeg
    This Computer- 2:44
    4 Instances- 2:46
    I re-ran the last test with 7 clips because of the time it took to reassemble the segmented clips.
    7 ProRes 422 clips to Photo Jpeg(total 3m14s)
    This Computer- 4:43
    4 Instances- 3:41
    One last test,
    A single ProRes 422 clip to Photo Jpeg(4:05;23)
    This Computer- 5:52
    4 Instances- 4:10
    Let me start off by saying it is clear that there are many factors that affect compression times, such as # of clips, length of clips, and codecs, but here are some of the things I noted:
    1) Some codecs seem to be "more aware" of the computer's resources than others.
    When I compress to H.264 w/o a cluster it will use about 80-85% of all resources.
    When I compress to H.264 with a cluster it will use about 90-95% of all resources.
    When I compress to Photo JPEG w/o a cluster it will use about 20-25% of all resources.
    When I compress to Photo JPEG with a cluster it will use about 80-85% of all resources.
    2) The time it takes to reassemble clips can be quite long and could affect overall speed.
    In the very last test, compressing a single file to Photo JPEG using 4 instances took 4m10s. Watching Batch Monitor, I noted that it took 2m0s to compress and 2m10s to reassemble. Wow...
    It would be interesting to see how the disassembly/reassembly of bigger and longer batches using clusters affects overall time. But that would take some time.
    I think the thing I will take away from all of this is that your workflow is your own. If you want to optimize it, you should inspect it, test it, and adjust it where it needs adjusting. Now if anyone has the time and wants to run similar tests with very different results, I'd love to know about it...

  • How to Schedule a Job Chain to start automatically on SAP CPS.

    Hi,
    I created a job chain and I want it to run automatically in SAP CPS Tuesday through Saturday at 6:00 a.m. I made a calendar in SAP CPS with these specific options, but the job chain doesn't start running. I don't know if I need to do something more, so if someone can give me a little help with this I would appreciate it a lot.
    Thanks,
    Omar

    It finished OK, but in the operator message I got the following:
    Unable to resubmit this job.
    Details:
    com.redwood.scheduler.api.exception.TimeWindowExpectedOpenWindowException: CalculateNextClose should only be called on an open time window
    at com.redwood.scheduler.model.method.impl.TimeWindowMethodImpl.calculateNextCloseIntersectionInt(TimeWindowMethodImpl.java:388)
    at com.redwood.scheduler.model.method.impl.TimeWindowMethodImpl.calculateNextCloseIntersectInt(TimeWindowMethodImpl.java:249)
    at com.redwood.scheduler.model.TimeWindowImpl.calculateNextCloseIntersectInt(TimeWindowImpl.java:212)
    at com.redwood.scheduler.model.method.impl.SubmitFrameMethodImpl.calculateNextInt(SubmitFrameMethodImpl.java:178)
    at com.redwood.scheduler.model.SubmitFrameImpl.calculateNext(SubmitFrameImpl.java:176)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.resubmitSubmitFrameJob(JobStatusChangePrepareListener.java:763)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.resubmitJob(JobStatusChangePrepareListener.java:637)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.processJobToFinalState(JobStatusChangePrepareListener.java:520)
    at com.redwood.scheduler.model.listeners.JobStatusChangePrepareListener.modelModified(JobStatusChangePrepareListener.java:233)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.informListeners(LowLevelPersistenceImpl.java:728)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.writeDirtyObjectListRetry(LowLevelPersistenceImpl.java:207)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.access$000(LowLevelPersistenceImpl.java:38)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl$WriteDirtyObjectListUnitOfWork.execute(LowLevelPersistenceImpl.java:79)
    at com.redwood.scheduler.persistence.impl.PersistenceUnitOfWorkManager.execute(PersistenceUnitOfWorkManager.java:34)
    at com.redwood.scheduler.persistence.impl.LowLevelPersistenceImpl.writeDirtyObjectList(LowLevelPersistenceImpl.java:102)
    at com.redwood.scheduler.cluster.persistence.ClusteredLowLevelPersistence.writeDirtyObjectList(ClusteredLowLevelPersistence.java:59)
    at com.redwood.scheduler.model.SchedulerSessionImpl.writeDirtyListLocal(SchedulerSessionImpl.java:648)
    at com.redwood.scheduler.model.SchedulerSessionImpl.persist(SchedulerSessionImpl.java:626)
    at com.redwood.scheduler.apiint.model.UnitOfWorkManager.perform(UnitOfWorkManager.java:32)
    at com.redwood.scheduler.apiint.model.UnitOfWorkManager.perform(UnitOfWorkManager.java:13)
    at com.redwood.scheduler.jobchainservice.JobChainService.childJobFinalStatus(JobChainService.java:223)
    at com.redwood.scheduler.core.processserver.ProcessServerRuntime.childJobFinalStatus(ProcessServerRuntime.java:836)
    at com.redwood.scheduler.core.processserver.ProcessServerRuntime.onMessage(ProcessServerRuntime.java:248)
    at com.redwood.scheduler.infrastructure.work.MessageEnabledWork.run(MessageEnabledWork.java:104)
    at com.redwood.scheduler.infrastructure.work.WorkerImpl.run(WorkerImpl.java:109)
    at java.lang.Thread.run(Thread.java:534)

  • How to create a wallet in oracle RAC environment

    How do I create a wallet in an Oracle RAC environment?
    While running the command: alter system set encryption key identified by "thalesdata4";
    I get the error message "cannot auto create wallet" or "failed to open wallet".
    Please suggest the correct way to create a wallet in a RAC environment.
    Thanks
    Sudhir

    Hi,
    Please refer to the following note for a detailed explanation:
    Master Note for SSL Configuration in Fusion Middleware 11g [ID 1218695.1]
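    For TDE wallets specifically, a minimal sketch, assuming a wallet directory on shared storage (for example ACFS) that every node can see; the path is a placeholder:
    -- sqlnet.ora on every node:
    --   ENCRYPTION_WALLET_LOCATION =
    --     (SOURCE = (METHOD = FILE)
    --       (METHOD_DATA = (DIRECTORY = /shared/oracle/wallet)))
    -- then, from one instance, create the master key (this creates the wallet):
    ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "thalesdata4";
    -- after a restart, open the existing wallet instead of recreating it:
    ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "thalesdata4";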
    regards

  • Can we schedule steps in a job chain to run at a particular time of the day?

    Hi ,
    We have created a job chain with 3 steps. Our requirement is that step 1 runs as per the schedule of the job chain, but we want step 2 to run on Friday at 2 GMT and step 3 to run on Saturday at 1 GMT.
    Is there any setting in the job chain so that we can schedule subsequent steps to run at a particular time?
    Regards
    Rajesh

    Hi,
    You can add a time window to the job definitions that you call in steps 2 and 3, to restrict the start times for these jobs to the desired times.
    Regards,
    Anton.

  • Scheduling a BI job chain in Redwood

    The problem I am having is that we are trying to schedule a BI job chain via Redwood software and are not getting any response. Within Redwood, I have executed the jobs IMPORT_BW_CHAINS, IMPORT_BW_CHAIN_DEFINITION, and IMPORT_BW_INFOPACKAGES using BI job chain 0fcsm_cm_10, which is defined in BI as a job chain. These jobs run to completion, but nothing is moved into Redwood to schedule, as you would see from an import of a CCMS job. When I run the job RUN_BW_CHAIN using the same BI job chain ID, I receive the error below. Not sure what I'm missing in the process of getting BI job chains scheduled with Redwood.
    ORA-06502: PL/SQL: numeric or value error
    ORA-06512: at "RSI.RSIEXEC", line 1638
    ORA-06512: at "RSI.RSIEXEC", line 1759
    ORA-06512: at "RSI.RSI_RUN_BW_CHAIN", line 21
    ORA-06512: at "RSI.RSI_RUN_BW_CHAIN", line 80
    ORA-06512: at line 1
    ORA-06512: at "SYS.DBMS_SYS_SQL", line 1200
    ORA-06512: at "SYS.DBMS_SQL", line 323
    ORA-06512: at "SYSJCS.DDL", line 1085
    ORA-06512: at "SYSJCS.DDL", line 1118
    ORA-06512: at "SYSJCS.DDL", line 1177
    ORA-06512: at line 3
    JCS-00215: in statement RSOJ_EXECUTE_JOB

    I am also seeing the same issue.
    Anton, here is the last information you requested.
    The following products are installed in the Cronacle repository:
    Product                                  Version    Status
    Cronacle for SAP solutions               7.0.3      Production 
    Cronacle Forecast Module                 7.0.3      Production 
    Cronacle Reports Module                  7.0.3      Production 
    Cronacle &module Module                  7.0.2      development
    Cronacle Mail Module                     7.0.3      Production 
    Cronacle Audit Module                    7.0.2 r2.2 Production 
    Cronacle Process Manager for Web         7.0.3      Production 
    Cronacle Module Installer                7.0.3      Production 
    Cronacle Repository                      7.0.3.34   Production 
    Cronacle Monitor Module                  7.0.3      Production

  • Instance name in non-RAC environment

    Hi!
    In a non-RAC environment, V$INSTANCE.INSTANCE_NAME does not actually display the name of the instance that was set in the INSTANCE_NAME parameter.
    It always displays DB_NAME instead.
    Is there any way to get the instance_name of the service the user has connected to in this environment?
    LSNRCTL for 32-bit Windows: Version 10.2.0.4.0 - Production on 28-JAN-2010 09:16:25
    Copyright (c) 1991, 2007, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vegas)(PORT=1524)))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for 32-bit Windows: Version 10.2.0.4.0 - Production
    Start Date 28-JAN-2010 09:15:36
    Uptime 0 days 0 hr. 0 min. 48 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File D:\oracle\db\product\10.2.0\network\admin\listener.ora
    Listener Log File D:\oracle\db\product\10.2.0\network\log\listener.log
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=vegas)(PORT=1524)))
    Services Summary...
    Service "EMCOR" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "EMCOR_XPT" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "PLSExtProc" has 1 instance(s).
    Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Service "RESXDB" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "SRV1" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "SRV2" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    The command completed successfully
    And SQL*Plus said:
    C:\Documents and Settings\oradba>sqlplus
    SQL*Plus: Release 10.2.0.4.0 - Production on Thu Jan 28 09:44:59 2010
    Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
    Enter user-name: emcos@emcor_srv2
    Enter password:
    Connected to:
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    09:45:04 EMCOS@emcor_srv2 >select name from v$database;
    NAME
    EMCOR
    Elapsed: 00:00:00.00
    09:45:07 EMCOS@emcor_srv2 >select instance_name from v$instance;
    INSTANCE_NAME
    emcor
    Elapsed: 00:00:00.01
    09:45:21 EMCOS@emcor_srv2 >select service_name from v$session where sid=(select unique sid from v$mystat);
    SERVICE_NAME
    SRV2

    Hemant K Chitale wrote:
    The documentation on INSTANCE_NAME in the 10gR2 Reference says :
    "In a single-instance database system, the instance name is usually the same as the database name."
    (this after
    "In a Real Application Clusters environment, multiple instances can be associated with a single database service. Clients can override Oracle's connection load balancing by specifying a particular instance by which to connect to the database. INSTANCE_NAME specifies the unique name of this instance.")
    This would imply that setting INSTANCE_NAME in non-RAC is ignored. The usage of the word "usually" is weak.
    Hemant K Chitale
    But what does lsnrctl say? It says that it is not weak:
    11:33:28 SYS@EMCOR_SRV1 >show parameter instance_name
    NAME TYPE VALUE
    instance_name                        string      INST0
    11:33:36 SYS@EMCOR_SRV1 >host lsnrctl status
    LSNRCTL for 32-bit Windows: Version 10.2.0.4.0 - Production on 28-JAN-2010 11:33:50
    Copyright (c) 1991, 2007, Oracle. All rights reserved.
    Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=vegas)(PORT=1524)))
    STATUS of the LISTENER
    Alias LISTENER
    Version TNSLSNR for 32-bit Windows: Version 10.2.0.4.0 - Production
    Start Date 28-JAN-2010 09:15:36
    Uptime 0 days 2 hr. 18 min. 14 sec
    Trace Level off
    Security ON: Local OS Authentication
    SNMP OFF
    Listener Parameter File D:\oracle\db\product\10.2.0\network\admin\listener.ora
    Listener Log File D:\oracle\db\product\10.2.0\network\log\listener.log
    Listening Endpoints Summary...
    (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=vegas)(PORT=1524)))
    Services Summary...
    Service "EMCOR" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "EMCOR_XPT" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "PLSExtProc" has 1 instance(s).
    Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
    Service "RESXDB" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "SRV1" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    Service "SRV2" has 1 instance(s).
    Instance "INST0", status READY, has 1 handler(s) for this service...
    The command completed successfully
    11:33:50 SYS@EMCOR_SRV1 >select sys_context('USERENV','INSTANCE_NAME') from dual;
    SYS_CONTEXT('USERENV','INSTANCE_NAME')
    emcor
    Elapsed: 00:00:00.00
    11:34:42 SYS@EMCOR_SRV1 >select service_name from v$session where sid=sys_context('USERENV','SID');
    SERVICE_NAME
    SRV1
    Best regards, Sergey

  • Error in redwood job chain for Infopackage

    Hi,
    We have recently installed Redwood for handling SAP jobs and are able to run all the job chains with ABAP program job steps successfully.
    However, for APO and BW job chains we have an intermediate step that executes a BW infopackage, where the job fails with the error below:
    SAP/BW Error Message: rfc call failed 089: Job BI_BTC<infopackage_name> has not (yet ?) been started
    The preceding ABAP job steps execute successfully. After the infopackage step fails, all the subsequent steps fail.
    This problem is common to all job chains with infopackages.
    Any help is greatly appreciated.
    Regards,
    Sandeep.

    Hello Anton,
    We are facing the same problem, with the same log error message.
    The infopackage is correctly started and ended in BW.
    Here are our versions:
    Redwood Explorer 7.0.4.2 SP2
    BW :   SAP_BASIS 70016
               SAP_BW 70018
    Do you think applying SAP CPS SP3 would solve the problem?
    Or can we solve it by modifying some specific parameters?
    Thanks in advance.
    Regards;
    Mathieu

  • How to create DIR/File on a raw device in RAC environment.

    Hi all,
    I use a shell script to create a directory and a file on a raw device; it also creates a schema and tablespaces.
    I am facing problems creating the directory and files on the raw device.
    One more thing: can multiple tablespaces be created on a raw device?
    Thanks & regards,
    Sanjeev

    Thanks for the response. Please help me further.
    About the script - it asks for a path for creating the directory and uses a shell command to create it. Later, the same path and directory name are used to create the Oracle directory object. Now, in place of an absolute path, the raw device name is passed. The same script is also used for creating the tablespaces and the schema.
    There is a second script, a .sql script, that creates an external table in the newly created schema. All this has been working fine on a single-instance Oracle server; we have tested it many times, but it fails in a RAC environment when we use a raw device.
    The question is: if I use a file system, will the external table's flat files and directories be accessible to all the instances?
    I have one application written in Java that will be clustered and running on these Oracle servers. This application will access those external tables and their flat files. Will there be a problem accessing these flat files across the instances?
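    For illustration, a minimal sketch of the file-system approach, assuming a cluster file system mounted at /shared/ext_data on every node (path and names are placeholders):
    CREATE OR REPLACE DIRECTORY ext_dir AS '/shared/ext_data';
    CREATE TABLE emp_ext (
      empno NUMBER,
      ename VARCHAR2(30)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_dir
      ACCESS PARAMETERS (
        RECORDS DELIMITED BY NEWLINE
        FIELDS TERMINATED BY ','
      )
      LOCATION ('emp.csv')
    );
    -- any instance can query emp_ext because every node mounts the same path;
    -- a raw device has no file system, so directories and external tables
    -- cannot point at it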
    Regards,
    Sanjeev.

  • Calculating total memory in an Oracle RAC environment

    I have to calculate total memory in a RAC environment.
    For the shared and buffer pools I execute SHOW SGA.
    For UGA and PGA I execute statements that give two different values.
    Below are my two different methods for calculating total memory in an Oracle RAC environment.
    Why do I get very different PGA values from these 2 statements?
    First statement:
    with vs as (
      select 'PGA: ' pid
            ,iid
            ,session_pga_memory + session_uga_memory bytes
        from (select inst_id iid
                    ,(select ss.value
                        from gv$sesstat ss
                       where ss.sid = s.sid
                         and ss.inst_id = s.inst_id
                         and ss.statistic# = 20) session_pga_memory
                    ,(select ss.value
                        from gv$sesstat ss
                       where ss.sid = s.sid
                         and ss.inst_id = s.inst_id
                         and ss.statistic# = 15) session_uga_memory
                from gv$session s)
      union all
      select 'SGA: ' || name pid
            ,s.inst_id iid
            ,value bytes
        from gv$sga s
    )
    select distinct iid, pid, sum(bytes) over (partition by iid, pid) bytes from vs;
    IID PID BYTES
    1 PGA: 196764792 <=====
    1 SGA: Database Buffers 318767104
    1 SGA: Fixed Size 733688
    1 SGA: Redo Buffers 811008
    1 SGA: Variable Size 335544320
    2 PGA: 77159560 <=====
    2 SGA: Database Buffers 318767104
    2 SGA: Fixed Size 733688
    2 SGA: Redo Buffers 811008
    2 SGA: Variable Size 335544320
    Second statement:
    with vs as (
      select 'PGA: ' pid
            ,p.inst_id iid
            ,p.pga_alloc_mem bytes
        from gv$session s
            ,gv$sesstat pcur
            ,gv$process p
       where pcur.statistic# in (20  -- session pga memory
                                ,15) -- session uga memory
         and s.paddr = p.addr
         and pcur.sid = s.sid
         and pcur.INST_ID = s.INST_ID
         and pcur.INST_ID = p.INST_ID
      union all
      select 'SGA: ' || name pid
            ,s.inst_id iid
            ,value bytes
        from gv$sga s
    )
    select distinct iid, pid, sum(bytes) over (partition by iid, pid) bytes from vs;
    IID PID BYTES
    1 PGA: 342558636 <=====
    1 SGA: Database Buffers 318767104
    1 SGA: Fixed Size 733688
    1 SGA: Redo Buffers 811008
    1 SGA: Variable Size 335544320
    2 PGA: 186091416 <=====
    2 SGA: Database Buffers 318767104
    2 SGA: Fixed Size 733688
    2 SGA: Redo Buffers 811008
    2 SGA: Variable Size 335544320

    I'm sorry, but it is not clear to me.
    - From gv$session (1st stmt) I have nearly 196MB of PGA memory on instance 1 and nearly 77MB on instance 2.
    - From gv$process (2nd stmt) I have nearly 342MB of PGA memory on instance 1 and nearly 186MB on instance 2.
    Then... (342+186) - (196+77) = nearly 255MB of memory allocated by Oracle processes but free?
    If I want to calculate the total amount of memory allocated by Oracle, is the second statement, which queries gv$process, more correct?
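    The gap is expected: the session statistics report what sessions are currently using, while gv$process reports what has been allocated to the server processes, part of which is free or freeable. A minimal sketch to compare the two per instance, assuming 10g or later for the pga_* columns of gv$process:
    select inst_id
          ,round(sum(pga_used_mem)     / 1024 / 1024) used_mb
          ,round(sum(pga_alloc_mem)    / 1024 / 1024) alloc_mb
          ,round(sum(pga_freeable_mem) / 1024 / 1024) freeable_mb
      from gv$process
     group by inst_id;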
